#!/usr/bin/python
# Copyright 2009-2012 - Luca Freschi <l.freschi@gmail.com>
# This file is part of QDC.
# QDC is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import sys
import os.path
import commands
def checks():
    msg = ":: Preliminary checks..."
    if os.path.isdir('results'):
        status, out = commands.getstatusoutput('rm -rf results')
        if status != 0:
            msg = msg + "[ERROR]\n"
            return status, msg, out
    if os.path.isfile('engine'):
        status, out = commands.getstatusoutput('rm engine')
        if status != 0:
            msg = msg + "[ERROR]\n"
            return status, msg, out
    status, out = commands.getstatusoutput('mkdir results')
    if status != 0:
        msg = msg + "[ERROR]\n"
        return status, msg, out
    status = 0
    out = "[ok]\n"
    return status, msg, out


def compile_model(file):
    msg = ":: Compilation of the model..."
    parse_file = './parser ' + str(file)
    status, out = commands.getstatusoutput(parse_file)
    if status != 0:
        msg = msg + "[ERROR]\n"
        return status, msg, out
    status, out = commands.getstatusoutput('make engine')
    if status != 0:
        msg = msg + "[ERROR]\n"
        return status, msg, out
    status = 0
    out = "[ok]\n"
    return status, msg, out


def simulation():
    msg = ":: Simulation..."
    status, out = commands.getstatusoutput('./engine ' + ' 0.1')
    if status != 0:
        msg = msg + "[ERROR]\n"
        return status, msg, out
    status = 0
    out = "[ok]\n"
    return status, msg, out


def write_results(i, model):
    msg = ":: Output files..."
    base_name = os.path.basename(model)
    current = base_name + '_reagents' + str(i) + '.csv'
    cmd = 'mv ' + model + '_reagents.csv ' + 'results/' + current
    status, out = commands.getstatusoutput(cmd)
    if status != 0:
        msg = msg + "[ERROR]\n"
        return status, msg, out
    current = base_name + '_reactions' + str(i) + '.csv'
    cmd = 'mv ' + model + '_reactions.csv ' + 'results/' + current
    status, out = commands.getstatusoutput(cmd)
    if status != 0:
        msg = msg + "[ERROR]\n"
        return status, msg, out
    current = base_name + '_reactioncounts' + str(i) + '.csv'
    cmd = 'mv ' + model + '_reactioncounts.csv ' + 'results/' + current
    status, out = commands.getstatusoutput(cmd)
    if status != 0:
        msg = msg + "[ERROR]\n"
        return status, msg, out
    current = base_name + '_log' + str(i) + '.txt'
    cmd = 'mv ' + model + '_log.txt ' + 'results/' + current
    status, out = commands.getstatusoutput(cmd)
    if status != 0:
        msg = msg + "[ERROR]\n"
        return status, msg, out
    status = 0
    out = "[ok]\n"
    return status, msg, out


if __name__ == '__main__':
    if len(sys.argv) != 3:
        print "usage: %s <file_input> <number_of_simulations>" % sys.argv[0]
        sys.exit()
    file = sys.argv[1]
    print file + "\n"
    # cut the extension
    model = os.path.splitext(file)[0]
    print model + "\n"
    n_of_simulations = int(sys.argv[2])
    s, m, o = checks()
    print m + o
    if s != 0:
        sys.exit()
    s, m, o = compile_model(file)
    print m + o
    if s != 0:
        sys.exit()
    for i in range(n_of_simulations):
        print "Run " + str(i + 1)
        s, m, o = simulation()
        print m + o
        if s != 0:
            break
        s, m, o = write_results(i, model)
        print m + o
        if s != 0:
            break
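A portability note on the script above: the `commands` module it relies on was removed in Python 3. If this runner were ever ported, the same (status, output) pattern is available in the standard library as `subprocess.getstatusoutput` — a minimal sketch, not part of QDC itself:

```python
import subprocess

# Python 3 equivalent of commands.getstatusoutput(): runs the command
# through the shell and returns (exit status, combined output with the
# trailing newline stripped).
status, out = subprocess.getstatusoutput('echo hello')
```

On success `status` is 0 and `out` holds the command's combined stdout/stderr, which matches how the script's helpers test `status != 0` and report `out` on error.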
|
Marketing logistics involve planning, delivering and controlling the flow of physical goods, marketing materials and information from the producer to the market. The aim is to meet customer demands while still making a satisfactory profit. To maintain your competitive edge, you need to create an effective strategy regarding product, price, place and promotion. These four functions of marketing logistics help the organization to reach the target customers and deliver the products or services sold by the organization to these customers.
The four functions of marketing logistics are product, price, place and promotion.
One function of marketing logistics is finding out who your customer is and how to get the product or service to that customer. Each customer can have individualized needs, so the logistical services provided may vary from customer to customer. Regardless of these differences, customers expect 100 percent conformance and assured reliability at all times with every transaction. The goals of this aspect of marketing logistics include filling the order, on-time delivery, precise invoicing and zero damage.
An organization bases pricing decisions on both internal and external factors. Marketing logistics must recognize price drivers. The profile of the customer, the product and the type of order are factors that drive the price. These changes are not typically controlled by marketing logistics. However, marketing logistics must react to these factors and understand how the factors affect customers’ decisions.
Discounts for quantities and the related logistical cost structure can impact the price the customer will ultimately pay for the product or service. Additional factors driving price include the shipping costs based on the size, weight and distance the organization will ship the item. Further, the size of the manufacturing run, labor costs and the types, quantities and quality of the materials used in the manufacturing process can affect price.
Promotion is another important aspect of an organization’s marketing logistics process. When bringing a product to market, the organization must coordinate the logistics of the various marketing materials. For example, the art department might design the artwork for the product's box and an outside supplier might manufacture the boxes with the artwork. Marketing logistics can help to ensure that all of these entities work together and produce the marketing materials needed to sell the product.
The function of place in marketing logistics allows the organization to simplify the transactions between a logistics provider and the customer. The organization must execute logistics in such a way that the customer is not aware of the complexities involved in the logistics process. For the customer, the output is always more important than the process. The organization should, therefore, never expose the backroom processes involved with logistics delivery to the customer.
The location of the factory, warehouse and customer can also greatly impact the marketing logistics process by increasing or reducing costs. For example, locating a factory in Mexico might reduce the labor costs associated with a product. However, at the same time, locating the factory in Mexico might increase the shipping costs and negate any cost savings.
Bass, Brian. "Four Functions of Marketing Logistics." Small Business - Chron.com, http://smallbusiness.chron.com/four-functions-marketing-logistics-21833.html. 25 January 2019.
|
# coding=utf-8
"""
Binary class deconstruct, reconstruct packet
"""
import copy
class Binary(object):

    @staticmethod
    def deconstruct_packet(packet):
        """
        Replaces every bytearray in packet with a numbered placeholder.
        :param packet:
        :return: dict with packet and list of buffers
        """
        buffers = []
        packet_data = packet.get('data', None)

        def _deconstruct_packet(data):
            if type(data) is bytearray:
                place_holder = {
                    '_placeholder': True,
                    'num': len(buffers)
                }
                buffers.append(data)
                return place_holder
            if type(data) is list:
                new_data = []
                for d in data:
                    new_data.append(_deconstruct_packet(d))
                return new_data
            if type(data) is dict:
                new_data = {}
                for k, v in data.items():
                    new_data[k] = _deconstruct_packet(v)
                return new_data
            return data

        pack = copy.copy(packet)
        pack['data'] = _deconstruct_packet(packet_data)
        pack['attachments'] = len(buffers)
        return {
            'packet': pack,
            'buffers': buffers
        }

    @staticmethod
    def reconstruct_packet(packet, buffers):
        def _reconstruct_packet(data):
            if type(data) is dict:
                if '_placeholder' in data:
                    buf = buffers[data['num']]
                    return buf
                else:
                    for k, v in data.items():
                        data[k] = _reconstruct_packet(v)
                    return data
            if type(data) is list:
                for i in xrange(len(data)):
                    data[i] = _reconstruct_packet(data[i])
                return data
            return data

        packet['data'] = _reconstruct_packet(packet['data'])
        del packet['attachments']
        return packet

    @staticmethod
    def remove_blobs(data):
        def _remove_blobs(obj, cur_key=None, containing_obj=None):
            if not obj:
                return obj
            try:
                # Try to read it as a file
                buf = bytearray(obj.read())
                if containing_obj is not None and cur_key is not None:
                    containing_obj[cur_key] = buf
                else:
                    return buf
            except AttributeError:
                pass
            if type(obj) is list:
                for index, item in enumerate(obj):
                    _remove_blobs(item, index, obj)
            if type(obj) is dict:
                for k, v in obj.items():
                    _remove_blobs(v, k, obj)
            return obj

        blobless_data = _remove_blobs(data)
        return blobless_data
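The placeholder scheme used by `Binary` can be illustrated with a small standalone sketch (an independent reimplementation for illustration, not the class above): every bytearray is swapped for a numbered placeholder dict on the way out, and swapped back by index on the way in.

```python
def deconstruct(data, buffers):
    # Replace every bytearray with a numbered placeholder dict,
    # collecting the buffers in order.
    if isinstance(data, bytearray):
        buffers.append(data)
        return {'_placeholder': True, 'num': len(buffers) - 1}
    if isinstance(data, list):
        return [deconstruct(d, buffers) for d in data]
    if isinstance(data, dict):
        return {k: deconstruct(v, buffers) for k, v in data.items()}
    return data

def reconstruct(data, buffers):
    # Inverse operation: swap placeholders back for their buffers.
    if isinstance(data, dict):
        if data.get('_placeholder'):
            return buffers[data['num']]
        return {k: reconstruct(v, buffers) for k, v in data.items()}
    if isinstance(data, list):
        return [reconstruct(d, buffers) for d in data]
    return data

buffers = []
packet = {'msg': 'hi', 'payload': [bytearray(b'\x01\x02'), {'img': bytearray(b'\x03')}]}
stripped = deconstruct(packet, buffers)
restored = reconstruct(stripped, buffers)
```

After `deconstruct`, `stripped` is JSON-serializable (the binary data lives in `buffers`), and `reconstruct` restores a structure equal to the original packet.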
|
The course aims to provide students with a basic understanding of modern Bayesian inference methods. It will emphasize and discuss methods that have applications in robotics, natural language processing, data mining, and web search.
The details of the summer 2013 version of the course are available at https://ufal.mff.cuni.cz/modern-bayesian-methods-machine-learning-summer-2013/.
Introduction to Bayesian Machine learning and Bayesian networks.
Belief propagation and loopy belief propagation in Bayesian Networks.
Variational Bayes and expectation propagation.
C. M. Bishop: Pattern Recognition and Machine Learning, Springer (2006).
K. Murphy: Machine Learning: a Probabilistic Perspective, the MIT Press (2012).
D. Barber: Bayesian Reasoning and Machine Learning, Cambridge University Press (2012), available freely on the web.
D. MacKay: Information Theory, Inference, and Learning Algorithms, Cambridge University Press (2003), available freely on the web at http://www.inference.phy.cam.ac.uk/mackay/itila/. Video lectures are also included.
|
#!/usr/bin/env python
# =========================================================================
#
# Copyright NumFOCUS
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0.txt
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# =========================================================================
"""
This script demonstrates the use of the Exhaustive optimizer in the
ImageRegistrationMethod to estimate a good initial rotation position.
Because gradient descent base optimization can get stuck in local
minima, a good initial transform is critical for reasonable
results. Search a reasonable space on a grid with brute force may be a
reliable way to get a starting location for further optimization.
The initial translation and center of rotation for the transform is
initialized based on the first principle moments of the intensities of
the image. Then in either 2D or 3D a Euler transform is used to
exhaustively search a grid of the rotation space at a certain step
size. The resulting transform is a reasonable guess where to start
further registration.
"""
import SimpleITK as sitk
import sys
import os
from math import pi
def command_iteration(method):
    if method.GetOptimizerIteration() == 0:
        print("Scales: ", method.GetOptimizerScales())
    print(f"{method.GetOptimizerIteration():3} = "
          f"{method.GetMetricValue():7.5f} : {method.GetOptimizerPosition()}")


if len(sys.argv) < 4:
    print("Usage:", sys.argv[0], "<fixedImageFile> <movingImageFile>",
          "<outputTransformFile>")
    sys.exit(1)

fixed = sitk.ReadImage(sys.argv[1], sitk.sitkFloat32)
moving = sitk.ReadImage(sys.argv[2], sitk.sitkFloat32)

R = sitk.ImageRegistrationMethod()
R.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)

sample_per_axis = 12
if fixed.GetDimension() == 2:
    tx = sitk.Euler2DTransform()
    # Set the number of samples (radius) in each dimension, with a
    # default step size of 1.0
    R.SetOptimizerAsExhaustive([sample_per_axis // 2, 0, 0])
    # Utilize the scale to set the step size for each dimension
    R.SetOptimizerScales([2.0 * pi / sample_per_axis, 1.0, 1.0])
elif fixed.GetDimension() == 3:
    tx = sitk.Euler3DTransform()
    R.SetOptimizerAsExhaustive([sample_per_axis // 2, sample_per_axis // 2,
                                sample_per_axis // 4, 0, 0, 0])
    R.SetOptimizerScales(
        [2.0 * pi / sample_per_axis, 2.0 * pi / sample_per_axis,
         2.0 * pi / sample_per_axis, 1.0, 1.0, 1.0])

# Initialize the transform with a translation and the center of
# rotation from the moments of intensity.
tx = sitk.CenteredTransformInitializer(fixed, moving, tx)

R.SetInitialTransform(tx)
R.SetInterpolator(sitk.sitkLinear)
R.AddCommand(sitk.sitkIterationEvent, lambda: command_iteration(R))

outTx = R.Execute(fixed, moving)

print("-------")
print(outTx)
print(f"Optimizer stop condition: {R.GetOptimizerStopConditionDescription()}")
print(f" Iteration: {R.GetOptimizerIteration()}")
print(f" Metric value: {R.GetMetricValue()}")

sitk.WriteTransform(outTx, sys.argv[3])

if "SITK_NOSHOW" not in os.environ:
    resampler = sitk.ResampleImageFilter()
    resampler.SetReferenceImage(fixed)
    resampler.SetInterpolator(sitk.sitkLinear)
    resampler.SetDefaultPixelValue(1)
    resampler.SetTransform(outTx)

    out = resampler.Execute(moving)
    simg1 = sitk.Cast(sitk.RescaleIntensity(fixed), sitk.sitkUInt8)
    simg2 = sitk.Cast(sitk.RescaleIntensity(out), sitk.sitkUInt8)
    cimg = sitk.Compose(simg1, simg2, simg1 // 2.0 + simg2 // 2.0)
    sitk.Show(cimg, "ImageRegistrationExhaustive Composition")
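As a sanity check on the grid the exhaustive optimizer walks (my reading of the step/scale semantics, not output from SimpleITK itself): for a parameter given n steps and scale s, the optimizer evaluates the 2n + 1 values -n*s through +n*s. With the 2D settings above, the rotation angle is swept from -pi to +pi:

```python
from math import pi

# Assumed semantics: steps passed to SetOptimizerAsExhaustive is a
# radius, and the per-parameter scale is the step size on the grid.
sample_per_axis = 12
steps = sample_per_axis // 2            # radius for the rotation parameter
scale = 2.0 * pi / sample_per_axis      # step size set via SetOptimizerScales
angles = [k * scale for k in range(-steps, steps + 1)]
# 13 rotation samples from -pi to +pi in steps of pi/6
```

This is why the scale is set to the full rotation range divided by `sample_per_axis`: the radius of `sample_per_axis // 2` steps on each side then covers the whole circle.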
|
At Williams Group, you will find a broad range of Approved Used BMW models, including the X Series. We have many years of experience working with the marque and can advise each of our customers to ensure they select a car that is right for them. Each of our models is in excellent condition, and with the X Series, you can enjoy both a smart and practical drive.
There are great options with the BMW X Series, as drivers can choose from hatchback, estate or coupé style models. Renowned for a sporty stance, the X Series showcases a choice of body colours with matching door handles, bumpers and door mirrors for a sleek and seamless finish. Other features such as LED daytime running lights and adaptive LED headlights ensure that your visibility is always at its best and provide an executive look.
Modern driving is a given with the X Series thanks to a host of sophisticated technological and convenient features. From built-in satellite navigation and smartphone connectivity, to a touchscreen console and cruise control, you can enjoy a relaxing and fun drive in the X Series.
Several transmission options are available with the strong 2.0-litre engine in diesel or petrol, manual or automatic. Speak to a BMW expert at Williams for more information on the specific models we have on offer, and we can also discuss our flexible finance plans.
If you see an X Series that you like, call the team at Williams now and we can arrange a test drive at your earliest convenience. You can also enquire online and we will answer your query promptly.
|
# 2016 Red Hat Inc.
#
# This file is part of Ansible
#
# Ansible is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Ansible is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Ansible. If not, see <http://www.gnu.org/licenses/>.
#
from __future__ import (absolute_import, division, print_function)
__metaclass__ = type
import re
import json
from ansible.module_utils._text import to_text, to_bytes
from ansible.plugins.terminal import TerminalBase
from ansible.errors import AnsibleConnectionFailure
class TerminalModule(TerminalBase):

    terminal_stdout_re = [
        re.compile(br"[\r\n]?[\w+\-\.:\/\[\]]+(?:\([^\)]+\)){,3}(?:>|#) ?$"),
        re.compile(br"\[\w+\@[\w\-\.]+(?: [^\]])\] ?[>#\$] ?$")
    ]

    terminal_stderr_re = [
        re.compile(br"% ?Error: (?:(?!\bdoes not exist\b)(?!\balready exists\b)(?!\bHost not found\b)(?!\bnot active\b).)*$"),
        re.compile(br"% ?Bad secret"),
        re.compile(br"invalid input", re.I),
        re.compile(br"(?:incomplete|ambiguous) command", re.I),
        re.compile(br"connection timed out", re.I),
        re.compile(br"'[^']' +returned error code: ?\d+"),
    ]

    def on_authorize(self, passwd=None):
        if self._get_prompt().endswith('#'):
            return
        cmd = {u'command': u'enable'}
        if passwd:
            cmd['prompt'] = to_text(r"[\r\n]?password: $", errors='surrogate_or_strict')
            cmd['answer'] = passwd
        try:
            self._exec_cli_command(to_bytes(json.dumps(cmd), errors='surrogate_or_strict'))
        except AnsibleConnectionFailure:
            raise AnsibleConnectionFailure('unable to elevate privilege to enable mode')
        # in dellos6 the terminal settings are accepted after the privilege mode
        try:
            self._exec_cli_command(b'terminal length 0')
        except AnsibleConnectionFailure:
            raise AnsibleConnectionFailure('unable to set terminal parameters')

    def on_deauthorize(self):
        prompt = self._get_prompt()
        if prompt is None:
            # if prompt is None most likely the terminal is hung up at a prompt
            return
        if prompt.strip().endswith(b')#'):
            self._exec_cli_command(b'end')
            self._exec_cli_command(b'disable')
        elif prompt.endswith(b'#'):
            self._exec_cli_command(b'disable')
|
Grab your Easter baskets because we've just taken egg deliciousness to the next level!
These adorable t-bars feature a pastel watercolour egg design with accents of pink, fuchsia, purple and blue, perfectly teamed with neutral grey suede details and white rubber soles.
Never has there been a more EGG-celent Easter shoe option for your little ones!
This is a new PK style shoe, the T-BAR! It features a velcro ankle closure, and because of the non-elastic ankle we recommend wearing socks with this style until the shoes are well worn in.
|
from django.conf.urls.defaults import url, patterns
from zorna.site import views
urlpatterns = patterns('',
    url(r'^options/$',
        views.admin_list_options,
        name='admin_list_options'),
    url(r'^registration/$',
        views.admin_site_registration,
        name='admin_site_registration'),
    url(r'^version/$',
        views.admin_site_version,
        name='admin_site_version'),
    url(r'^alerts/$',
        views.admin_list_alerts,
        name='admin_list_alerts'),
    url(r'^alerts/add/$',
        views.admin_add_alert,
        name='admin_add_alert'),
    url(r'^edit/(?P<alert>\d+)/$',
        views.admin_edit_alert,
        name='admin_edit_alert'),
    url(r'^calendar/categories/$',
        views.admin_list_calendar_categories,
        name='admin_list_calendar_categories'),
    url(r'^calendar/categories/add/$',
        views.admin_add_calendar_category,
        name='admin_add_calendar_category'),
    url(r'^calendar/categories/edit/(?P<category>\d+)/$',
        views.admin_edit_calendar_category,
        name='admin_edit_calendar_category'),
)
|
L'Ascenseur Végétal is an association (our registration number- a.k.a. SIRET - is 811 961 077 00021).
.Personal data - Access & Storage.
According to the French law "Informatique & Libertés" dated January 6th, 1978 (modified in 2004), L'Ascenseur Végétal's website has been registered (simplified application) by the legal body named "CNIL" on May 13th, 2013. Registration No. for the CNIL application : 1672598.
In France, personal data are protected by laws No. 78-87 dated January 6th, 1978, No. 2004-801 dated August 6th, 2004, article L. 226-13 of the French penal code ("Code pénal") and the European Directive of October 24th, 1995.
According to the clauses in articles 38 and following of law 78-17 dated january 6th, 1978 regarding Information Technology, digital records and individual liberties, all users have the right to access, modify and oppose any of their personal data.
When navigating L'Ascenseur Végétal's website, the following data may be gathered: the URL of the link through which the user reached L'Ascenseur Végétal's website, the user's Internet Service Provider and IP address.
L'Ascenseur Végétal only gathers personal data for the performance of services offered on the website (newsletter, creation of an account necessary to place an order). The user provides this information in full awareness, particularly since they have to type the information in themselves.
No personal data from any user of L'Ascenseur Végétal's website will be published, exchanged, transferred or sold (to any third party, in any form and on any media) without the rightful owner of these data being informed beforehand.
L'Ascenseur Végétal's website features hyperlinks to other websites. However, the editor of L'Ascenseur Végétal's website cannot verify the content of the other websites that may be reached in this manner, and he will not bear any responsibilities regarding problems that may occur resulting from the use of such hyperlinks.
Navigation on L'Ascenseur Végétal's website may trigger the installation of cookies on the user's computer. A cookie is a small file that records information about the pages visited on a given website but does not permit the identification of the user. Data gathered in this manner facilitate navigation on the website during later visits and are also used to measure the number and frequency of the user's visits.
The user may configure his/her computer to refuse cookie installation, although refusing a cookie may prevent some features of the website from operating correctly.
Any dispute that may arise from the use of L'Ascenseur Végétal's website is submitted to French Laws. The relevant court in Bordeaux, France will be the sole jurisdiction to make decisions regarding any such dispute.
The security of the www.ascenseurvegetal.com website is extremely important to L'Ascenseur Végétal, however we cannot guarantee its complete integrity and the absence of malicious modification (intrusion, virus) by a third-party.
Regular maintenance operations on our IT systems and other technical factors related to the use of the Internet (e.g. network overload) may impact the performance of L'Ascenseur Végétal's website, and the features and functionalities offered may at times be unavailable.
L'Ascenseur Végétal's editor may - for any reason and at its sole discretion - cancel, modify, interrupt access to the entire website (or any part) of L'Ascenseur Végétal, without prior notice. This may include content, features, hours of operation.
We would like to remind you that the security of your data and the integrity of communications on the Internet can never be guaranteed.
L'Ascenseur Végétal disclaims all responsibility regarding any consequences related to technical failures in relation with the use of the website www.ascenseurvegetal.com, including but not limited to: difficulty to access the website, interruption of availability of the website, difficulty to transfer data.
|
import re
import unicodedata
import json
from django.core.exceptions import ImproperlyConfigured
from django.core.validators import validate_email, ValidationError
from django.core import urlresolvers
from django.contrib.sites.models import Site
from django.db.models import FieldDoesNotExist
from django.db.models.fields import (DateTimeField, DateField,
EmailField, TimeField)
from django.utils import six, dateparse
from django.utils.datastructures import SortedDict
from django.core.serializers.json import DjangoJSONEncoder
try:
    from django.utils.encoding import force_text
except ImportError:
    from django.utils.encoding import force_unicode as force_text

try:
    import importlib
except ImportError:
    from django.utils import importlib


def _generate_unique_username_base(txts, regex=None):
    username = None
    regex = regex or '[^\w\s@+.-]'
    for txt in txts:
        if not txt:
            continue
        username = unicodedata.normalize('NFKD', force_text(txt))
        username = username.encode('ascii', 'ignore').decode('ascii')
        username = force_text(re.sub(regex, '', username).lower())
        # Django allows '@' in usernames in order to accommodate projects
        # wanting to use e-mail for username. In allauth we don't
        # use this, we already have a proper place for putting e-mail
        # addresses (EmailAddress), so let's not use the full e-mail
        # address and only take the part leading up to the '@'.
        username = username.split('@')[0]
        username = username.strip()
        username = re.sub('\s+', '_', username)
        if username:
            break
    return username or 'user'


def get_username_max_length():
    from .account.app_settings import USER_MODEL_USERNAME_FIELD
    if USER_MODEL_USERNAME_FIELD is not None:
        User = get_user_model()
        max_length = User._meta.get_field(USER_MODEL_USERNAME_FIELD).max_length
    else:
        max_length = 0
    return max_length


def generate_unique_username(txts, regex=None):
    from .account.app_settings import USER_MODEL_USERNAME_FIELD
    username = _generate_unique_username_base(txts, regex)
    User = get_user_model()
    max_length = get_username_max_length()
    i = 0
    while True:
        try:
            if i:
                pfx = str(i + 1)
            else:
                pfx = ''
            ret = username[0:max_length - len(pfx)] + pfx
            query = {USER_MODEL_USERNAME_FIELD + '__iexact': ret}
            User.objects.get(**query)
            i += 1
        except User.MultipleObjectsReturned:
            i += 1
        except User.DoesNotExist:
            return ret


def valid_email_or_none(email):
    ret = None
    try:
        if email:
            validate_email(email)
            if len(email) <= EmailField().max_length:
                ret = email
    except ValidationError:
        pass
    return ret


def email_address_exists(email, exclude_user=None):
    from .account import app_settings as account_settings
    from .account.models import EmailAddress

    emailaddresses = EmailAddress.objects
    if exclude_user:
        emailaddresses = emailaddresses.exclude(user=exclude_user)
    ret = emailaddresses.filter(email__iexact=email).exists()
    if not ret:
        email_field = account_settings.USER_MODEL_EMAIL_FIELD
        if email_field:
            users = get_user_model().objects
            if exclude_user:
                users = users.exclude(pk=exclude_user.pk)
            ret = users.filter(**{email_field + '__iexact': email}).exists()
    return ret


def import_attribute(path):
    assert isinstance(path, six.string_types)
    pkg, attr = path.rsplit('.', 1)
    ret = getattr(importlib.import_module(pkg), attr)
    return ret


def import_callable(path_or_callable):
    if not hasattr(path_or_callable, '__call__'):
        ret = import_attribute(path_or_callable)
    else:
        ret = path_or_callable
    return ret


try:
    from django.contrib.auth import get_user_model
except ImportError:
    # To keep compatibility with Django 1.4
    def get_user_model():
        from . import app_settings
        from django.db.models import get_model

        try:
            app_label, model_name = app_settings.USER_MODEL.split('.')
        except ValueError:
            raise ImproperlyConfigured("AUTH_USER_MODEL must be of the"
                                       " form 'app_label.model_name'")
        user_model = get_model(app_label, model_name)
        if user_model is None:
            raise ImproperlyConfigured("AUTH_USER_MODEL refers to model"
                                       " '%s' that has not been installed"
                                       % app_settings.USER_MODEL)
        return user_model


def get_current_site(request=None):
    """Wrapper around ``Site.objects.get_current`` to handle ``Site`` lookups
    by request in Django >= 1.8.

    :param request: optional request object
    :type request: :class:`django.http.HttpRequest`
    """
    # >= django 1.8
    if request and hasattr(Site.objects, '_get_site_by_request'):
        site = Site.objects.get_current(request=request)
    else:
        site = Site.objects.get_current()
    return site


def resolve_url(to):
    """
    Subset of django.shortcuts.resolve_url (that one is 1.5+)
    """
    try:
        return urlresolvers.reverse(to)
    except urlresolvers.NoReverseMatch:
        # If this doesn't "feel" like a URL, re-raise.
        if '/' not in to and '.' not in to:
            raise
        # Finally, fall back and assume it's a URL
        return to


def serialize_instance(instance):
    """
    Since Django 1.6, items added to the session are no longer pickled
    but JSON encoded by default. We are storing partially complete models
    in the session (user, account, token, ...). We cannot use standard
    Django serialization, as these models are not "complete" yet.
    Serialization will start complaining about missing relations et al.
    """
    ret = dict([(k, v)
                for k, v in instance.__dict__.items()
                if not (k.startswith('_') or callable(v))])
    return json.loads(json.dumps(ret, cls=DjangoJSONEncoder))


def deserialize_instance(model, data):
    ret = model()
    for k, v in data.items():
        if v is not None:
            try:
                f = model._meta.get_field(k)
                if isinstance(f, DateTimeField):
                    v = dateparse.parse_datetime(v)
                elif isinstance(f, TimeField):
                    v = dateparse.parse_time(v)
                elif isinstance(f, DateField):
                    v = dateparse.parse_date(v)
            except FieldDoesNotExist:
                pass
        setattr(ret, k, v)
    return ret


def set_form_field_order(form, fields_order):
    if isinstance(form.fields, SortedDict):
        form.fields.keyOrder = fields_order
    else:
        # Python 2.7+
        from collections import OrderedDict
        assert isinstance(form.fields, OrderedDict)
        form.fields = OrderedDict((f, form.fields[f])
                                  for f in fields_order)


def build_absolute_uri(request, location, protocol=None):
    uri = request.build_absolute_uri(location)
    if protocol:
        uri = protocol + ':' + uri.partition(':')[2]
    return uri


def get_form_class(forms, form_id, default_form):
    form_class = forms.get(form_id, default_form)
    if isinstance(form_class, six.string_types):
        form_class = import_attribute(form_class)
    return form_class


def get_request_param(request, param, default=None):
    return request.POST.get(param) or request.GET.get(param, default)
|
Stock up on those must-haves for their everyday wear! Shop and save on Best in Basics.
Get Up to 50% Off hundreds of select styles for baby Boy & Girl, toddler Girl & Boy, Girls & Boys during The Great Big Sale. Based on original retail price. Excludes new arrivals.
Sneak Peek! Make a Splash! Shop and save on swimsuits, rash guard sets, swim trunks and more.
CLEARANCE SAVINGS! Get Oshkosh B’gosh Tlc Real Magic Tunic for $5.99 CAD, Reg. Price – $18 CAD.
Get 25% Off select outerwear & winter boots.
Looks to Love! Find Fresh New Outfits perfectly styled for every occasion!
Get 20% Off when you spend $50 and sign up for the Carter’s newsletter. Limited time.
CLEARANCE SAVINGS! Get Oshkosh B’gosh Suspender Jeans – Derby Wash for toddler boy from $12.99 CAD, Reg. Price – $24 CAD.
|
# -*- coding: utf-8 -*-
# Generated by Django 1.10.4 on 2017-01-21 23:32
from __future__ import unicode_literals
from django.db import migrations, models
import django.db.models.deletion
class Migration(migrations.Migration):

    initial = True

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Brand',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=50)),
            ],
        ),
        migrations.CreateModel(
            name='Component',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=50, unique=True)),
                ('photoUrl', models.URLField(blank=True, null=True)),
            ],
        ),
        migrations.CreateModel(
            name='HardDriveType',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=10)),
            ],
        ),
        migrations.CreateModel(
            name='MotherBoardFormFactor',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=10)),
            ],
        ),
        migrations.CreateModel(
            name='PciType',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=50)),
            ],
        ),
        migrations.CreateModel(
            name='PowerSupplyFormFactor',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=10)),
            ],
        ),
        migrations.CreateModel(
            name='RamFrequency',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('frequency', models.IntegerField()),
            ],
        ),
        migrations.CreateModel(
            name='RamType',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('typeName', models.CharField(max_length=10)),
            ],
        ),
        migrations.CreateModel(
            name='Socket',
            fields=[
                ('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
                ('name', models.CharField(max_length=10)),
            ],
        ),
        migrations.CreateModel(
            name='Case',
            fields=[
                ('component_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='components.Component')),
                ('weight', models.FloatField()),
                ('width', models.IntegerField()),
                ('height', models.IntegerField()),
                ('depth', models.IntegerField()),
                ('motherBoardFormFactors', models.ManyToManyField(to='components.MotherBoardFormFactor')),
                ('powerSupplyFormFactor', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='components.PowerSupplyFormFactor')),
            ],
            bases=('components.component',),
        ),
        migrations.CreateModel(
            name='GraphicCard',
            fields=[
                ('component_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='components.Component')),
                ('memory', models.IntegerField()),
                ('pcitype', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='components.PciType')),
            ],
            bases=('components.component',),
        ),
        migrations.CreateModel(
            name='HardDrive',
            fields=[
                ('component_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='components.Component')),
                ('capacity', models.IntegerField()),
                ('hardDriveType', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='components.HardDriveType')),
            ],
            bases=('components.component',),
        ),
        migrations.CreateModel(
            name='Motherboard',
            fields=[
                ('component_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='components.Component')),
('ramSlots', models.IntegerField()),
('maxRam', models.IntegerField()),
('formfactor', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='components.MotherBoardFormFactor')),
('pcitypes', models.ManyToManyField(to='components.PciType')),
('ramfrequency', models.ManyToManyField(to='components.RamFrequency')),
('ramtype', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='components.RamType')),
('socket', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='components.Socket')),
],
bases=('components.component',),
),
migrations.CreateModel(
name='PowerSupply',
fields=[
('component_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='components.Component')),
('watts', models.IntegerField()),
('modular', models.BooleanField()),
('factorForm', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='components.PowerSupplyFormFactor')),
],
bases=('components.component',),
),
migrations.CreateModel(
name='Processor',
fields=[
('component_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='components.Component')),
('frequency', models.FloatField()),
('cores', models.IntegerField()),
('socket', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='components.Socket')),
],
bases=('components.component',),
),
migrations.CreateModel(
name='Ram',
fields=[
('component_ptr', models.OneToOneField(auto_created=True, on_delete=django.db.models.deletion.CASCADE, parent_link=True, primary_key=True, serialize=False, to='components.Component')),
('capacity', models.IntegerField()),
('quantity', models.IntegerField()),
('frequency', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='components.RamFrequency')),
('ramtype', models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='components.RamType')),
],
bases=('components.component',),
),
migrations.AddField(
model_name='component',
name='brand',
field=models.ForeignKey(on_delete=django.db.models.deletion.CASCADE, to='components.Brand'),
),
]
|
Frank Thomas wastes no words. The Cleveland Indians, he concludes, improved themselves greatly Tuesday when they added outfielders Marquis Grissom and David Justice in a deal for center-fielder Kenny Lofton and left-handed relief pitcher Alan Embree.
"That's a real serious move," Thomas said. "They picked up two All-Stars for one."
"They're stronger," he said of the Indians, who have won two consecutive American League Central titles. "They have more pop in the leadoff spot and with Justice they have more in the outfield."
While Sox chairman Jerry Reinsdorf and General Manager Ron Schueler called it a good deal for both teams, Thomas believes it's one-sided for the Indians.
"I think they got the better trade," Thomas said. "It makes 'em a better team. Lofton brings a lot to the table, but they're getting two superstars."
Both Schueler and Reinsdorf emphasized the Indians' loss of Lofton more than the addition of Grissom and Justice. Reinsdorf practically gushed.
"I am thrilled to have Lofton out of the American League," he said. "I don't want to evaluate their trade. I just know I'm going to like not having to face Lofton. I go back to the end of (Joe) DiMaggio's career. I saw (Willie) Mays, (Duke) Snider. They were great center-fielders. But I never saw anybody take over a game like Lofton can. ... He can save three or four runs a game (with his glove)."
Reinsdorf said he wasn't surprised to see Cleveland add about $6 million in payroll at a time when almost all teams are set.
"They're sold out for the whole year," Reinsdorf said. "They have to add seats. They have a team that's consistently picked to win, and they're going for it. ... They're not going to lose money."
Don't expect the Sox to try to match blockbuster trades with their AL Central rival. Schueler continues working the phones, but he's looking only for a right-handed reliever and a left-handed bat off the bench.
Darwin rocked again: Although the Sox have downplayed Danny Darwin's ineffectiveness, the veteran does little to inspire confidence. He gave up six runs on 10 hits in an 8-7 victory over Minnesota Tuesday night, and has allowed 18 runs and 31 hits in 13 1/3 innings overall.
"When I'm making mistakes, it's over the middle of the plate, not on the corners," Darwin said. "I guess this is what spring training is all about."
Darwin and left-hander Mike Bertotti are the two remaining candidates for the fifth starter's job. Bertotti, who worked one inning Tuesday, is being groomed for the bullpen.
Short hops: An MRI on right-hander Roger McDowell's surgically treated right shoulder revealed damage that could require another arthroscopic surgery. "If he doesn't respond to medication, we might have to go in with a scope," Schueler said. ... The Sox purchased the contract of non-roster catcher Chad Kreuter, who is batting .133. He could be released or traded by midnight Wednesday.
|
# -*- coding: utf-8 -*-
"""
Project Euler: Problem 12
=========================
https://projecteuler.net/problem=12
Highly divisible triangular number
----------------------------------
The sequence of triangle numbers is generated by adding the natural
numbers. So the 7th triangle number would be 1 + 2 + 3 + 4 + 5 + 6 + 7 =
28. The first ten terms would be:
1, 3, 6, 10, 15, 21, 28, 36, 45, 55, ...
Let us list the factors of the first seven triangle numbers:
1: 1
3: 1,3
6: 1,2,3,6
10: 1,2,5,10
15: 1,3,5,15
21: 1,3,7,21
28: 1,2,4,7,14,28
We can see that 28 is the first triangle number to have over five
divisors.
What is the value of the first triangle number to have over five hundred
divisors?
"""
from collections import Counter
from itertools import count, islice
from projecteuler.problems.problem5 import factorize
number = 12
target = 500
answer = 76576500
def triangle_numbers():
"""Generate the triangle numbers (sums of the natural numbers).
    >>> list(islice(triangle_numbers(), 10))
[1, 3, 6, 10, 15, 21, 28, 36, 45, 55]
"""
current = 0
for i in count(1):
current += i
yield current
assert (list(islice(triangle_numbers(), 10)) ==
[1, 3, 6, 10, 15, 21, 28, 36, 45, 55])
def num_divisors(n):
    """Find the number of divisors of n from its prime factorization."""
    counts = Counter(factorize(n))
    total = 1
    # d(n) = product of (exponent + 1) over the prime factorization of n.
    for exponent in counts.values():
        total *= exponent + 1
    return total
divisor_tests = {
1: [1],
3: [1, 3],
6: [1, 2, 3, 6],
10: [1, 2, 5, 10],
15: [1, 3, 5, 15],
21: [1, 3, 7, 21],
28: [1, 2, 4, 7, 14, 28],
}
for k, expected in divisor_tests.items():
assert num_divisors(k) == len(expected)
def solution():
return next(n for n in triangle_numbers() if num_divisors(n) > target)
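A quick brute-force cross-check of the divisor-counting claim above (illustrative only; trial division is O(n), so it is suitable just for small numbers):

```python
def brute_force_num_divisors(n):
    """Count the divisors of n by trial division (O(n); small n only)."""
    return sum(1 for d in range(1, n + 1) if n % d == 0)

# 28 is the first triangle number with more than five divisors.
assert brute_force_num_divisors(28) == 6
assert all(brute_force_num_divisors(t) <= 5 for t in (1, 3, 6, 10, 15, 21))
```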
|
The Veritek Genie In-Lift 3D Wheel Aligner System. The First in the World In-Lift 3D Wheel Alignment Design.
Made in India Wheel Alignment Machine.
World’s 1st In-lift 3D Wheel Aligner with 2 cameras & smallest targets.
World’s smallest target plates for front wheels.
No fixed beam / column in front of the lift.
No obstruction of signals due to technician walking around.
Adjustment at any convenient height; no height restrictions.
Genie 3D can be installed in 11′ X 18′ space.
No need for minimum distance between Wall and the lift.
Free space in the shop floor can be effectively utilized.
Exclusive software takes care of automatic lift-level compensation.
Genie is a creatively designed module that houses high-definition cameras and electronics to capture images of the targets on wheels.
Each Genie has 1 camera, making Genie 3D the only aligner in the world using just 2 cameras for an in-lift model.
Camera not required to be removed & refitted every time.
User-friendly Wi-Fi-enabled tablet to display alignment results and operate the PC from the alignment bay, avoiding multiple trips to the PC console.
|
from operator import itemgetter
from Bot.Plugins.Base import PluginBase
class HelpCommandPlugin(PluginBase):
"""
    This plugin sends a list of all available commands to a user when they type the configured command
help - Sends a list of all available commands
optional example json ( can be inserted in config.json under plugins ):
"mydayyy_help_command": {
"commands": {
"help": {
"command": "help",
"accesslevel": 0
}
}
}
"""
def __init__(self, bot_instance):
super().__init__(bot_instance)
# init command variables
self.command_help_cmd = bot_instance.get_user_setting("mydayyy_help_command.commands.help.command") or "help"
self.command_help_al = bot_instance.get_user_setting("mydayyy_help_command.commands.help.accesslevel") or 0
self.bot_instance.add_chat_command(self.command_help_cmd,
                                           "Sends a list of all available commands",
self.command_help_al,
self.command_help,
[])
def command_help(self, invokerid, invokername, invokeruid, msg_splitted):
client_access_level = self.bot_instance.get_client_accesslevel(invokerid)
chat_commands = self.bot_instance.get_all_commands()
sorted_commands = []
idx = 0
for key, command in chat_commands.items():
idx += 1
color = "[COLOR=green]" if client_access_level >= command.accesslevel else "[COLOR=red]"
args = " ".join(command.args)
            if not args:
                answer = color + key + " - " + command.description + " [" + str(command.accesslevel) + "][/COLOR]"
            else:
                answer = color + key + " " + args + " - " + command.description + " [" + str(command.accesslevel) + "][/COLOR]"
sorted_commands.append([idx, answer])
sorted_commands = sorted(sorted_commands, key=itemgetter(0))
for answer in sorted_commands:
self.bot_instance.send_text_to_client(invokerid, answer[1])
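The access-level coloring used in `command_help` can be sketched standalone (the function name here is illustrative, not part of the bot framework): commands the client is allowed to run are rendered in green BB-code, the rest in red.

```python
def format_command_line(key, description, accesslevel, client_level, args=""):
    """Mimic the plugin's BB-code coloring: green if the client's access
    level permits the command, red otherwise."""
    color = "[COLOR=green]" if client_level >= accesslevel else "[COLOR=red]"
    middle = key if not args else key + " " + args
    return color + middle + " - " + description + " [" + str(accesslevel) + "][/COLOR]"

print(format_command_line("help", "Lists all commands", 0, 0))
# → [COLOR=green]help - Lists all commands [0][/COLOR]
```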
|
In line with the initials of his name, viz., “LSG”, which as an acronym stands for “Learning, Sharing and Growing”, George believes in constantly challenging his own self towards any new learning and believes in sharing the same. George now, as Chairman & Prime Servant, heads 5E serpraise, a company with the vision of “enriching everyone”.
As founder President, George was instrumental in setting up the Compensation Club (Currently known as HR Association) at Bangalore wherein today 21 leading multinational companies are members and he currently serves as an Executive Committee Member.
He is also the past secretary of the National HRD Network, Bangalore Chapter and member of National Institute of Personnel Management. He is also a visiting faculty in many management institutes and resource person for many corporate training programs.
George holds a Bachelor’s degree in Mathematics (1976), a B.Tech. in Aeronautical Engineering (1979) and an M.Tech. in Aircraft Structures (1981).
He then pursued a post-graduate course in Human Resources Management at XLRI, Jamshedpur (1983), in line with his passion for people development. George is an ICF-certified Coach.
My method is underscored by leadership principles designed to create custom solutions for unique problems, and my work with clients is highly collaborative and focused.
George, Founder | 5e serpraise.
During his twenty years of experience, George worked with multinational as well as Indian companies, including TVS Suzuki and Titan Watches during their inception stages. George joined 3M in 1989 during the formation of the company. In addition to heading the HR function in India, he was also responsible for guiding 3M Sri Lanka on their HR issues.
|
# Copyright 2014 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Create / interact with Google Cloud Storage buckets."""
import base64
import copy
import datetime
import json
import warnings
import six
from google.api_core import page_iterator
from google.api_core import datetime_helpers
from google.cloud._helpers import _datetime_to_rfc3339
from google.cloud._helpers import _NOW
from google.cloud._helpers import _rfc3339_to_datetime
from google.cloud.exceptions import NotFound
from google.cloud.iam import Policy
from google.cloud.storage import _signing
from google.cloud.storage._helpers import _PropertyMixin
from google.cloud.storage._helpers import _scalar_property
from google.cloud.storage._helpers import _validate_name
from google.cloud.storage.acl import BucketACL
from google.cloud.storage.acl import DefaultObjectACL
from google.cloud.storage.blob import Blob
from google.cloud.storage.blob import _get_encryption_headers
from google.cloud.storage.notification import BucketNotification
from google.cloud.storage.notification import NONE_PAYLOAD_FORMAT
_LOCATION_SETTER_MESSAGE = (
"Assignment to 'Bucket.location' is deprecated, as it is only "
"valid before the bucket is created. Instead, pass the location "
"to `Bucket.create`.")
def _blobs_page_start(iterator, page, response):
"""Grab prefixes after a :class:`~google.cloud.iterator.Page` started.
:type iterator: :class:`~google.api_core.page_iterator.Iterator`
:param iterator: The iterator that is currently in use.
    :type page: :class:`~google.api_core.page_iterator.Page`
:param page: The page that was just created.
:type response: dict
:param response: The JSON API response for a page of blobs.
"""
page.prefixes = tuple(response.get('prefixes', ()))
iterator.prefixes.update(page.prefixes)
def _item_to_blob(iterator, item):
"""Convert a JSON blob to the native object.
.. note::
This assumes that the ``bucket`` attribute has been
added to the iterator after being created.
:type iterator: :class:`~google.api_core.page_iterator.Iterator`
:param iterator: The iterator that has retrieved the item.
:type item: dict
:param item: An item to be converted to a blob.
:rtype: :class:`.Blob`
:returns: The next blob in the page.
"""
name = item.get('name')
blob = Blob(name, bucket=iterator.bucket)
blob._set_properties(item)
return blob
def _item_to_notification(iterator, item):
    """Convert a JSON notification to the native object.
.. note::
This assumes that the ``bucket`` attribute has been
added to the iterator after being created.
:type iterator: :class:`~google.api_core.page_iterator.Iterator`
:param iterator: The iterator that has retrieved the item.
:type item: dict
    :param item: An item to be converted to a notification.
:rtype: :class:`.BucketNotification`
:returns: The next notification being iterated.
"""
return BucketNotification.from_api_repr(item, bucket=iterator.bucket)
class LifecycleRuleConditions(dict):
"""Map a single lifecycle rule for a bucket.
See: https://cloud.google.com/storage/docs/lifecycle
:type age: int
    :param age: (optional) apply rule action to items whose age, in days,
        exceeds this value.
:type created_before: datetime.date
:param created_before: (optional) apply rule action to items created
before this date.
:type is_live: bool
:param is_live: (optional) if true, apply rule action to non-versioned
items, or to items with no newer versions. If false, apply
rule action to versioned items with at least one newer
version.
:type matches_storage_class: list(str), one or more of
:attr:`Bucket._STORAGE_CLASSES`.
    :param matches_storage_class: (optional) apply rule action to items
        whose storage class matches this value.
:type number_of_newer_versions: int
:param number_of_newer_versions: (optional) apply rule action to versioned
items having N newer versions.
:raises ValueError: if no arguments are passed.
"""
def __init__(self, age=None, created_before=None, is_live=None,
matches_storage_class=None, number_of_newer_versions=None,
_factory=False):
conditions = {}
if age is not None:
conditions['age'] = age
if created_before is not None:
conditions['createdBefore'] = created_before.isoformat()
if is_live is not None:
conditions['isLive'] = is_live
if matches_storage_class is not None:
conditions['matchesStorageClass'] = matches_storage_class
if number_of_newer_versions is not None:
conditions['numNewerVersions'] = number_of_newer_versions
if not _factory and not conditions:
raise ValueError("Supply at least one condition")
super(LifecycleRuleConditions, self).__init__(conditions)
@classmethod
def from_api_repr(cls, resource):
"""Factory: construct instance from resource.
:type resource: dict
:param resource: mapping as returned from API call.
:rtype: :class:`LifecycleRuleConditions`
:returns: Instance created from resource.
"""
instance = cls(_factory=True)
instance.update(resource)
return instance
@property
def age(self):
        """Condition's age value."""
return self.get('age')
@property
def created_before(self):
        """Condition's created_before value."""
before = self.get('createdBefore')
if before is not None:
return datetime_helpers.from_iso8601_date(before)
@property
def is_live(self):
        """Condition's 'is_live' value."""
return self.get('isLive')
@property
def matches_storage_class(self):
        """Condition's 'matches_storage_class' value."""
return self.get('matchesStorageClass')
@property
def number_of_newer_versions(self):
        """Condition's 'number_of_newer_versions' value."""
return self.get('numNewerVersions')
class LifecycleRuleDelete(dict):
"""Map a lifecycle rule deleting matching items.
:type kw: dict
    :param kw: arguments passed to :class:`LifecycleRuleConditions`.
"""
def __init__(self, **kw):
conditions = LifecycleRuleConditions(**kw)
rule = {
'action': {
'type': 'Delete',
},
'condition': dict(conditions),
}
super(LifecycleRuleDelete, self).__init__(rule)
@classmethod
def from_api_repr(cls, resource):
"""Factory: construct instance from resource.
:type resource: dict
:param resource: mapping as returned from API call.
:rtype: :class:`LifecycleRuleDelete`
:returns: Instance created from resource.
"""
instance = cls(_factory=True)
instance.update(resource)
return instance
class LifecycleRuleSetStorageClass(dict):
    """Map a lifecycle rule updating the storage class of matching items.
:type storage_class: str, one of :attr:`Bucket._STORAGE_CLASSES`.
:param storage_class: new storage class to assign to matching items.
:type kw: dict
    :param kw: arguments passed to :class:`LifecycleRuleConditions`.
"""
def __init__(self, storage_class, **kw):
conditions = LifecycleRuleConditions(**kw)
rule = {
'action': {
'type': 'SetStorageClass',
'storageClass': storage_class,
},
'condition': dict(conditions),
}
super(LifecycleRuleSetStorageClass, self).__init__(rule)
@classmethod
def from_api_repr(cls, resource):
"""Factory: construct instance from resource.
:type resource: dict
:param resource: mapping as returned from API call.
    :rtype: :class:`LifecycleRuleSetStorageClass`
:returns: Instance created from resource.
"""
action = resource['action']
instance = cls(action['storageClass'], _factory=True)
instance.update(resource)
return instance
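Because the lifecycle rule classes above subclass ``dict``, each rule serializes directly to the JSON shape the API expects. The hand-built mappings below mirror what the constructors produce (the ages chosen are illustrative values, not defaults):

```python
# Equivalent to LifecycleRuleDelete(age=365): delete items older than a year.
delete_rule = {
    'action': {'type': 'Delete'},
    'condition': {'age': 365},
}

# Equivalent to LifecycleRuleSetStorageClass('NEARLINE', age=30):
# move month-old items to the NEARLINE storage class.
set_class_rule = {
    'action': {'type': 'SetStorageClass', 'storageClass': 'NEARLINE'},
    'condition': {'age': 30},
}
```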
class Bucket(_PropertyMixin):
"""A class representing a Bucket on Cloud Storage.
:type client: :class:`google.cloud.storage.client.Client`
:param client: A client which holds credentials and project configuration
for the bucket (which requires a project).
:type name: str
:param name: The name of the bucket. Bucket names must start and end with a
number or letter.
:type user_project: str
:param user_project: (Optional) the project ID to be billed for API
requests made via this instance.
"""
_MAX_OBJECTS_FOR_ITERATION = 256
"""Maximum number of existing objects allowed in iteration.
This is used in Bucket.delete() and Bucket.make_public().
"""
_STORAGE_CLASSES = (
'MULTI_REGIONAL',
'REGIONAL',
'NEARLINE',
'COLDLINE',
'STANDARD', # alias for MULTI_REGIONAL/REGIONAL, based on location
'DURABLE_REDUCED_AVAILABILITY', # deprecated
)
"""Allowed values for :attr:`storage_class`.
See
https://cloud.google.com/storage/docs/json_api/v1/buckets#storageClass
https://cloud.google.com/storage/docs/storage-classes
"""
def __init__(self, client, name=None, user_project=None):
name = _validate_name(name)
super(Bucket, self).__init__(name=name)
self._client = client
self._acl = BucketACL(self)
self._default_object_acl = DefaultObjectACL(self)
self._label_removals = set()
self._user_project = user_project
def __repr__(self):
return '<Bucket: %s>' % (self.name,)
@property
def client(self):
"""The client bound to this bucket."""
return self._client
def _set_properties(self, value):
"""Set the properties for the current object.
:type value: dict or :class:`google.cloud.storage.batch._FutureDict`
:param value: The properties to be set.
"""
self._label_removals.clear()
return super(Bucket, self)._set_properties(value)
@property
def user_project(self):
"""Project ID to be billed for API requests made via this bucket.
If unset, API requests are billed to the bucket owner.
:rtype: str
"""
return self._user_project
def blob(self, blob_name, chunk_size=None,
encryption_key=None, kms_key_name=None):
"""Factory constructor for blob object.
.. note::
This will not make an HTTP request; it simply instantiates
a blob object owned by this bucket.
:type blob_name: str
:param blob_name: The name of the blob to be instantiated.
:type chunk_size: int
:param chunk_size: The size of a chunk of data whenever iterating
(in bytes). This must be a multiple of 256 KB per
the API specification.
:type encryption_key: bytes
:param encryption_key:
Optional 32 byte encryption key for customer-supplied encryption.
:type kms_key_name: str
:param kms_key_name:
Optional resource name of KMS key used to encrypt blob's content.
:rtype: :class:`google.cloud.storage.blob.Blob`
:returns: The blob object created.
"""
return Blob(name=blob_name, bucket=self, chunk_size=chunk_size,
encryption_key=encryption_key, kms_key_name=kms_key_name)
def notification(self, topic_name,
topic_project=None,
custom_attributes=None,
event_types=None,
blob_name_prefix=None,
payload_format=NONE_PAYLOAD_FORMAT):
"""Factory: create a notification resource for the bucket.
See: :class:`.BucketNotification` for parameters.
:rtype: :class:`.BucketNotification`
"""
return BucketNotification(
self, topic_name,
topic_project=topic_project,
custom_attributes=custom_attributes,
event_types=event_types,
blob_name_prefix=blob_name_prefix,
payload_format=payload_format,
)
def exists(self, client=None):
"""Determines whether or not this bucket exists.
If :attr:`user_project` is set, bills the API request to that project.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the current bucket.
:rtype: bool
:returns: True if the bucket exists in Cloud Storage.
"""
client = self._require_client(client)
# We only need the status code (200 or not) so we seek to
# minimize the returned payload.
query_params = {'fields': 'name'}
if self.user_project is not None:
query_params['userProject'] = self.user_project
try:
# We intentionally pass `_target_object=None` since fields=name
# would limit the local properties.
client._connection.api_request(
method='GET', path=self.path,
query_params=query_params, _target_object=None)
# NOTE: This will not fail immediately in a batch. However, when
# Batch.finish() is called, the resulting `NotFound` will be
# raised.
return True
except NotFound:
return False
def create(self, client=None, project=None, location=None):
"""Creates current bucket.
If the bucket already exists, will raise
:class:`google.cloud.exceptions.Conflict`.
This implements "storage.buckets.insert".
If :attr:`user_project` is set, bills the API request to that project.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the current bucket.
:type project: str
:param project: Optional. The project under which the bucket is to
be created. If not passed, uses the project set on
the client.
:raises ValueError: if :attr:`user_project` is set.
:raises ValueError: if ``project`` is None and client's
:attr:`project` is also None.
:type location: str
:param location: Optional. The location of the bucket. If not passed,
the default location, US, will be used. See
https://cloud.google.com/storage/docs/bucket-locations
"""
if self.user_project is not None:
raise ValueError("Cannot create bucket with 'user_project' set.")
client = self._require_client(client)
if project is None:
project = client.project
if project is None:
raise ValueError(
"Client project not set: pass an explicit project.")
query_params = {'project': project}
properties = {key: self._properties[key] for key in self._changes}
properties['name'] = self.name
if location is not None:
properties['location'] = location
api_response = client._connection.api_request(
method='POST', path='/b', query_params=query_params,
data=properties, _target_object=self)
self._set_properties(api_response)
def patch(self, client=None):
"""Sends all changed properties in a PATCH request.
Updates the ``_properties`` with the response from the backend.
If :attr:`user_project` is set, bills the API request to that project.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: the client to use. If not passed, falls back to the
``client`` stored on the current object.
"""
# Special case: For buckets, it is possible that labels are being
# removed; this requires special handling.
if self._label_removals:
self._changes.add('labels')
self._properties.setdefault('labels', {})
for removed_label in self._label_removals:
self._properties['labels'][removed_label] = None
# Call the superclass method.
return super(Bucket, self).patch(client=client)
@property
def acl(self):
"""Create our ACL on demand."""
return self._acl
@property
def default_object_acl(self):
"""Create our defaultObjectACL on demand."""
return self._default_object_acl
@staticmethod
def path_helper(bucket_name):
"""Relative URL path for a bucket.
:type bucket_name: str
:param bucket_name: The bucket name in the path.
:rtype: str
:returns: The relative URL path for ``bucket_name``.
"""
return '/b/' + bucket_name
@property
def path(self):
"""The URL path to this bucket."""
if not self.name:
raise ValueError('Cannot determine path without bucket name.')
return self.path_helper(self.name)
def get_blob(self, blob_name, client=None, encryption_key=None, **kwargs):
"""Get a blob object by name.
This will return None if the blob doesn't exist:
.. literalinclude:: snippets.py
:start-after: [START get_blob]
:end-before: [END get_blob]
If :attr:`user_project` is set, bills the API request to that project.
:type blob_name: str
:param blob_name: The name of the blob to retrieve.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the current bucket.
:type encryption_key: bytes
:param encryption_key:
Optional 32 byte encryption key for customer-supplied encryption.
See
https://cloud.google.com/storage/docs/encryption#customer-supplied.
:type kwargs: dict
:param kwargs: Keyword arguments to pass to the
:class:`~google.cloud.storage.blob.Blob` constructor.
:rtype: :class:`google.cloud.storage.blob.Blob` or None
:returns: The blob object if it exists, otherwise None.
"""
client = self._require_client(client)
query_params = {}
if self.user_project is not None:
query_params['userProject'] = self.user_project
blob = Blob(bucket=self, name=blob_name, encryption_key=encryption_key,
**kwargs)
try:
headers = _get_encryption_headers(encryption_key)
response = client._connection.api_request(
method='GET',
path=blob.path,
query_params=query_params,
headers=headers,
_target_object=blob,
)
# NOTE: We assume response.get('name') matches `blob_name`.
blob._set_properties(response)
# NOTE: This will not fail immediately in a batch. However, when
# Batch.finish() is called, the resulting `NotFound` will be
# raised.
return blob
except NotFound:
return None
def list_blobs(self, max_results=None, page_token=None, prefix=None,
delimiter=None, versions=None,
projection='noAcl', fields=None, client=None):
"""Return an iterator used to find blobs in the bucket.
If :attr:`user_project` is set, bills the API request to that project.
:type max_results: int
:param max_results: (Optional) Maximum number of blobs to return.
:type page_token: str
:param page_token: (Optional) Opaque marker for the next "page" of
blobs. If not passed, will return the first page
of blobs.
:type prefix: str
:param prefix: (Optional) prefix used to filter blobs.
:type delimiter: str
:param delimiter: (Optional) Delimiter, used with ``prefix`` to
emulate hierarchy.
:type versions: bool
:param versions: (Optional) Whether object versions should be returned
as separate blobs.
:type projection: str
:param projection: (Optional) If used, must be 'full' or 'noAcl'.
Defaults to ``'noAcl'``. Specifies the set of
properties to return.
:type fields: str
:param fields: (Optional) Selector specifying which fields to include
in a partial response. Must be a list of fields. For
example to get a partial response with just the next
page token and the language of each blob returned:
``'items/contentLanguage,nextPageToken'``.
:type client: :class:`~google.cloud.storage.client.Client`
:param client: (Optional) The client to use. If not passed, falls back
to the ``client`` stored on the current bucket.
:rtype: :class:`~google.api_core.page_iterator.Iterator`
:returns: Iterator of all :class:`~google.cloud.storage.blob.Blob`
in this bucket matching the arguments.
"""
extra_params = {'projection': projection}
if prefix is not None:
extra_params['prefix'] = prefix
if delimiter is not None:
extra_params['delimiter'] = delimiter
if versions is not None:
extra_params['versions'] = versions
if fields is not None:
extra_params['fields'] = fields
if self.user_project is not None:
extra_params['userProject'] = self.user_project
client = self._require_client(client)
path = self.path + '/o'
iterator = page_iterator.HTTPIterator(
client=client,
api_request=client._connection.api_request,
path=path,
item_to_value=_item_to_blob,
page_token=page_token,
max_results=max_results,
extra_params=extra_params,
page_start=_blobs_page_start)
iterator.bucket = self
iterator.prefixes = set()
return iterator
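The ``prefix``/``delimiter`` pair above emulates a directory hierarchy over flat object names. A minimal local sketch of that server-side grouping (the sample names and the ``'/'`` delimiter are illustrative, not tied to any real bucket):

```python
def group_by_delimiter(names, prefix='', delimiter='/'):
    """Split object names into direct items and collapsed "subdirectory" prefixes."""
    items, prefixes = [], set()
    for name in names:
        if not name.startswith(prefix):
            continue
        remainder = name[len(prefix):]
        if delimiter in remainder:
            # Everything up to (and including) the first delimiter collapses
            # into a single prefix, like a folder entry.
            prefixes.add(prefix + remainder.split(delimiter, 1)[0] + delimiter)
        else:
            items.append(name)
    return items, sorted(prefixes)

items, prefixes = group_by_delimiter(
    ['a.txt', 'logs/one.log', 'logs/two.log', 'img/x.png'])
```

The iterator's ``prefixes`` attribute plays the same role as the second return value here: names sharing a prefix are reported once, not per object.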
def list_notifications(self, client=None):
"""List Pub / Sub notifications for this bucket.
See:
https://cloud.google.com/storage/docs/json_api/v1/notifications/list
If :attr:`user_project` is set, bills the API request to that project.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the current bucket.
:rtype: list of :class:`.BucketNotification`
:returns: notification instances
"""
client = self._require_client(client)
path = self.path + '/notificationConfigs'
iterator = page_iterator.HTTPIterator(
client=client,
api_request=client._connection.api_request,
path=path,
item_to_value=_item_to_notification)
iterator.bucket = self
return iterator
def delete(self, force=False, client=None):
"""Delete this bucket.
The bucket **must** be empty in order to submit a delete request. If
``force=True`` is passed, this will first attempt to delete all the
objects / blobs in the bucket (i.e. try to empty the bucket).
If the bucket doesn't exist, this will raise
:class:`google.cloud.exceptions.NotFound`. If the bucket is not empty
(and ``force=False``), will raise
:class:`google.cloud.exceptions.Conflict`.
If ``force=True`` and the bucket contains more than 256 objects / blobs
this will cowardly refuse to delete the objects (or the bucket). This
is to prevent accidental bucket deletion and to prevent extremely long
runtime of this method.
If :attr:`user_project` is set, bills the API request to that project.
:type force: bool
:param force: If True, empties the bucket's objects then deletes it.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the current bucket.
:raises: :class:`ValueError` if ``force`` is ``True`` and the bucket
contains more than 256 objects / blobs.
"""
client = self._require_client(client)
query_params = {}
if self.user_project is not None:
query_params['userProject'] = self.user_project
if force:
blobs = list(self.list_blobs(
max_results=self._MAX_OBJECTS_FOR_ITERATION + 1,
client=client))
if len(blobs) > self._MAX_OBJECTS_FOR_ITERATION:
message = (
'Refusing to delete bucket with more than '
'%d objects. If you actually want to delete '
'this bucket, please delete the objects '
'yourself before calling Bucket.delete().'
) % (self._MAX_OBJECTS_FOR_ITERATION,)
raise ValueError(message)
# Ignore 404 errors on delete.
self.delete_blobs(blobs, on_error=lambda blob: None,
client=client)
# We intentionally pass `_target_object=None` since a DELETE
# request has no response value (whether in a standard request or
# in a batch request).
client._connection.api_request(
method='DELETE',
path=self.path,
query_params=query_params,
_target_object=None)
def delete_blob(self, blob_name, client=None):
"""Deletes a blob from the current bucket.
If the blob isn't found (backend 404), raises a
:class:`google.cloud.exceptions.NotFound`.
For example:
.. literalinclude:: snippets.py
:start-after: [START delete_blob]
:end-before: [END delete_blob]
If :attr:`user_project` is set, bills the API request to that project.
:type blob_name: str
:param blob_name: A blob name to delete.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the current bucket.
:raises: :class:`google.cloud.exceptions.NotFound` (to suppress
the exception, call ``delete_blobs``, passing a no-op
``on_error`` callback, e.g.:
.. literalinclude:: snippets.py
:start-after: [START delete_blobs]
:end-before: [END delete_blobs]
"""
client = self._require_client(client)
query_params = {}
if self.user_project is not None:
query_params['userProject'] = self.user_project
blob_path = Blob.path_helper(self.path, blob_name)
# We intentionally pass `_target_object=None` since a DELETE
# request has no response value (whether in a standard request or
# in a batch request).
client._connection.api_request(
method='DELETE',
path=blob_path,
query_params=query_params,
_target_object=None)
def delete_blobs(self, blobs, on_error=None, client=None):
"""Deletes a list of blobs from the current bucket.
Uses :meth:`delete_blob` to delete each individual blob.
If :attr:`user_project` is set, bills the API request to that project.
:type blobs: list
:param blobs: A list of :class:`~google.cloud.storage.blob.Blob`-s or
blob names to delete.
:type on_error: callable
:param on_error: (Optional) Takes single argument: ``blob``. Called
once for each blob raising
:class:`~google.cloud.exceptions.NotFound`;
otherwise, the exception is propagated.
:type client: :class:`~google.cloud.storage.client.Client`
:param client: (Optional) The client to use. If not passed, falls back
to the ``client`` stored on the current bucket.
:raises: :class:`~google.cloud.exceptions.NotFound` (if
`on_error` is not passed).
"""
for blob in blobs:
try:
blob_name = blob
if not isinstance(blob_name, six.string_types):
blob_name = blob.name
self.delete_blob(blob_name, client=client)
except NotFound:
if on_error is not None:
on_error(blob)
else:
raise
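``delete_blobs`` accepts either ``Blob`` instances or raw string names; the normalization in the loop above reduces to the helper below (``FakeBlob`` and the plain ``str`` check are illustrative stand-ins for the library's ``Blob`` and ``six.string_types``):

```python
class FakeBlob(object):
    """Illustrative stand-in for a Blob: anything exposing a ``name``."""
    def __init__(self, name):
        self.name = name

def to_blob_name(blob):
    # Accept either a blob-like object or a raw string name.
    if isinstance(blob, str):
        return blob
    return blob.name
```

This is why mixed lists such as ``['stale.txt', some_blob]`` work: each entry is reduced to a name before ``delete_blob`` is called.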
def copy_blob(self, blob, destination_bucket, new_name=None,
client=None, preserve_acl=True, source_generation=None):
"""Copy the given blob to the given bucket, optionally with a new name.
If :attr:`user_project` is set, bills the API request to that project.
:type blob: :class:`google.cloud.storage.blob.Blob`
:param blob: The blob to be copied.
:type destination_bucket: :class:`google.cloud.storage.bucket.Bucket`
:param destination_bucket: The bucket into which the blob should be
copied.
:type new_name: str
:param new_name: (optional) the new name for the copied file.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the current bucket.
:type preserve_acl: bool
:param preserve_acl: Optional. Copies ACL from old blob to new blob.
Default: True.
:type source_generation: long
:param source_generation: Optional. The generation of the blob to be
copied.
:rtype: :class:`google.cloud.storage.blob.Blob`
:returns: The new Blob.
"""
client = self._require_client(client)
query_params = {}
if self.user_project is not None:
query_params['userProject'] = self.user_project
if source_generation is not None:
query_params['sourceGeneration'] = source_generation
if new_name is None:
new_name = blob.name
new_blob = Blob(bucket=destination_bucket, name=new_name)
api_path = blob.path + '/copyTo' + new_blob.path
copy_result = client._connection.api_request(
method='POST',
path=api_path,
query_params=query_params,
_target_object=new_blob,
)
if not preserve_acl:
new_blob.acl.save(acl={}, client=client)
new_blob._set_properties(copy_result)
return new_blob
def rename_blob(self, blob, new_name, client=None):
"""Rename the given blob using copy and delete operations.
If :attr:`user_project` is set, bills the API request to that project.
Effectively, copies blob to the same bucket with a new name, then
deletes the blob.
.. warning::
This method will first duplicate the data and then delete the
old blob. This means that with very large objects renaming
can be a (temporarily) costly or a very slow operation.
:type blob: :class:`google.cloud.storage.blob.Blob`
:param blob: The blob to be renamed.
:type new_name: str
:param new_name: The new name for this blob.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the current bucket.
:rtype: :class:`Blob`
:returns: The newly-renamed blob.
"""
same_name = blob.name == new_name
new_blob = self.copy_blob(blob, self, new_name, client=client)
if not same_name:
blob.delete(client=client)
return new_blob
@property
def cors(self):
"""Retrieve or set CORS policies configured for this bucket.
See http://www.w3.org/TR/cors/ and
https://cloud.google.com/storage/docs/json_api/v1/buckets
.. note::
The getter for this property returns a list which contains
*copies* of the bucket's CORS policy mappings. Mutating the list
or one of its dicts has no effect unless you then re-assign the
dict via the setter. E.g.:
>>> policies = bucket.cors
>>> policies.append({'origin': ['https://example.com'], 'method': ['GET']})
>>> policies[1]['maxAgeSeconds'] = 3600
>>> del policies[0]
>>> bucket.cors = policies
>>> bucket.update()
:setter: Set CORS policies for this bucket.
:getter: Gets the CORS policies for this bucket.
:rtype: list of dictionaries
:returns: A sequence of mappings describing each CORS policy.
"""
return [copy.deepcopy(policy)
for policy in self._properties.get('cors', ())]
@cors.setter
def cors(self, entries):
"""Set CORS policies configured for this bucket.
See http://www.w3.org/TR/cors/ and
https://cloud.google.com/storage/docs/json_api/v1/buckets
:type entries: list of dictionaries
:param entries: A sequence of mappings describing each CORS policy.
"""
self._patch_property('cors', entries)
default_event_based_hold = _scalar_property('defaultEventBasedHold')
"""Are uploaded objects automatically placed under an even-based hold?
If True, uploaded objects will be placed under an event-based hold to
be released at a future time. When released an object will then begin
the retention period determined by the policy retention period for the
object bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
If the property is not set locally, returns ``None``.
:rtype: bool or ``NoneType``
"""
@property
def default_kms_key_name(self):
"""Retrieve / set default KMS encryption key for objects in the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
:setter: Set default KMS encryption key for items in this bucket.
:getter: Get default KMS encryption key for items in this bucket.
:rtype: str
:returns: Default KMS encryption key, or ``None`` if not set.
"""
encryption_config = self._properties.get('encryption', {})
return encryption_config.get('defaultKmsKeyName')
@default_kms_key_name.setter
def default_kms_key_name(self, value):
"""Set default KMS encryption key for objects in the bucket.
:type value: str or None
:param value: new KMS key name (None to clear any existing key).
"""
encryption_config = self._properties.get('encryption', {})
encryption_config['defaultKmsKeyName'] = value
self._patch_property('encryption', encryption_config)
@property
def labels(self):
"""Retrieve or set labels assigned to this bucket.
See
https://cloud.google.com/storage/docs/json_api/v1/buckets#labels
.. note::
The getter for this property returns a dict which is a *copy*
of the bucket's labels. Mutating that dict has no effect unless
you then re-assign the dict via the setter. E.g.:
>>> labels = bucket.labels
>>> labels['new_key'] = 'some-label'
>>> del labels['old_key']
>>> bucket.labels = labels
>>> bucket.update()
:setter: Set labels for this bucket.
:getter: Gets the labels for this bucket.
:rtype: :class:`dict`
:returns: Name-value pairs (string->string) labelling the bucket.
"""
labels = self._properties.get('labels')
if labels is None:
return {}
return copy.deepcopy(labels)
@labels.setter
def labels(self, mapping):
"""Set labels assigned to this bucket.
See
https://cloud.google.com/storage/docs/json_api/v1/buckets#labels
:type mapping: :class:`dict`
:param mapping: Name-value pairs (string->string) labelling the bucket.
"""
# If any labels have been expressly removed, we need to track this
# so that a future .patch() call can do the correct thing.
existing = set([k for k in self.labels.keys()])
incoming = set([k for k in mapping.keys()])
self._label_removals = self._label_removals.union(
existing.difference(incoming),
)
# Actually update the labels on the object.
self._patch_property('labels', copy.deepcopy(mapping))
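The removal bookkeeping in the setter is a set difference between existing and incoming label keys. Isolated as a pure function, under the assumption that pending removals accumulate across successive assignments:

```python
def track_removals(existing_labels, new_labels, pending_removals=frozenset()):
    """Return the updated set of label keys a later .patch() must delete."""
    # Keys present before but absent from the new mapping were removed.
    removed = set(existing_labels) - set(new_labels)
    # Accumulate with removals recorded by earlier assignments.
    return set(pending_removals) | removed
```

Without this tracking, a patch request would simply omit the dropped keys and the server would leave them in place.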
@property
def etag(self):
"""Retrieve the ETag for the bucket.
See https://tools.ietf.org/html/rfc2616#section-3.11 and
https://cloud.google.com/storage/docs/json_api/v1/buckets
:rtype: str or ``NoneType``
:returns: The bucket etag or ``None`` if the bucket's
resource has not been loaded from the server.
"""
return self._properties.get('etag')
@property
def id(self):
"""Retrieve the ID for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
:rtype: str or ``NoneType``
:returns: The ID of the bucket or ``None`` if the bucket's
resource has not been loaded from the server.
"""
return self._properties.get('id')
@property
def lifecycle_rules(self):
"""Retrieve or set lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and
https://cloud.google.com/storage/docs/json_api/v1/buckets
.. note::
The getter for this property returns a generator which yields
*copies* of the bucket's lifecycle rule mappings. Mutating a
yielded rule has no effect unless you then re-assign the full
list of rules via the setter. E.g.:
>>> rules = list(bucket.lifecycle_rules)
>>> rules.append({'action': {'type': 'Delete'}, 'condition': {'age': 365}})
>>> rules[1]['condition']['age'] = 30
>>> del rules[0]
>>> bucket.lifecycle_rules = rules
>>> bucket.update()
:setter: Set lifecycle rules for this bucket.
:getter: Gets the lifecycle rules for this bucket.
:rtype: generator(dict)
:returns: A sequence of mappings describing each lifecycle rule.
"""
info = self._properties.get('lifecycle', {})
for rule in info.get('rule', ()):
action_type = rule['action']['type']
if action_type == 'Delete':
yield LifecycleRuleDelete.from_api_repr(rule)
elif action_type == 'SetStorageClass':
yield LifecycleRuleSetStorageClass.from_api_repr(rule)
else:
raise ValueError("Unknown lifecycle rule: {}".format(rule))
@lifecycle_rules.setter
def lifecycle_rules(self, rules):
"""Set lifestyle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and
https://cloud.google.com/storage/docs/json_api/v1/buckets
:type rules: list of dictionaries
:param rules: A sequence of mappings describing each lifecycle rule.
"""
rules = [dict(rule) for rule in rules] # Convert helpers if needed
self._patch_property('lifecycle', {'rule': rules})
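The ``lifecycle_rules`` getter dispatches each raw rule mapping on its ``action.type``. The same dispatch, sketched with tag strings in place of the ``LifecycleRule*`` helper classes:

```python
def classify_rule(rule):
    # Mirror of the getter's branch: 'Delete' and 'SetStorageClass' are the
    # action types handled there; anything else is rejected as unknown.
    action_type = rule['action']['type']
    if action_type == 'Delete':
        return 'delete'
    if action_type == 'SetStorageClass':
        return 'set_storage_class'
    raise ValueError("Unknown lifecycle rule: {}".format(rule))
```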
def clear_lifecyle_rules(self):
"""Set lifestyle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and
https://cloud.google.com/storage/docs/json_api/v1/buckets
"""
self.lifecycle_rules = []
def add_lifecycle_delete_rule(self, **kw):
"""Add a "delete" rule to lifestyle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and
https://cloud.google.com/storage/docs/json_api/v1/buckets
.. literalinclude:: snippets.py
:start-after: [START add_lifecycle_delete_rule]
:end-before: [END add_lifecycle_delete_rule]
:type kw: dict
:param kw: arguments passed to :class:`LifecycleRuleConditions`.
"""
rules = list(self.lifecycle_rules)
rules.append(LifecycleRuleDelete(**kw))
self.lifecycle_rules = rules
def add_lifecycle_set_storage_class_rule(self, storage_class, **kw):
"""Add a "delete" rule to lifestyle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and
https://cloud.google.com/storage/docs/json_api/v1/buckets
.. literalinclude:: snippets.py
:start-after: [START add_lifecycle_set_storage_class_rule]
:end-before: [END add_lifecycle_set_storage_class_rule]
:type storage_class: str, one of :attr:`_STORAGE_CLASSES`.
:param storage_class: new storage class to assign to matching items.
:type kw: dict
:param kw: arguments passed to :class:`LifecycleRuleConditions`.
"""
rules = list(self.lifecycle_rules)
rules.append(LifecycleRuleSetStorageClass(storage_class, **kw))
self.lifecycle_rules = rules
_location = _scalar_property('location')
@property
def location(self):
"""Retrieve location configured for this bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets and
https://cloud.google.com/storage/docs/bucket-locations
Returns ``None`` if the property has not been set before creation,
or if the bucket's resource has not been loaded from the server.
:rtype: str or ``NoneType``
"""
return self._location
@location.setter
def location(self, value):
"""(Deprecated) Set `Bucket.location`
This can only be set at bucket **creation** time.
See https://cloud.google.com/storage/docs/json_api/v1/buckets and
https://cloud.google.com/storage/docs/bucket-locations
.. warning::
Assignment to 'Bucket.location' is deprecated, as it is only
valid before the bucket is created. Instead, pass the location
to `Bucket.create`.
"""
warnings.warn(
_LOCATION_SETTER_MESSAGE, DeprecationWarning, stacklevel=2)
self._location = value
def get_logging(self):
"""Return info about access logging for this bucket.
See https://cloud.google.com/storage/docs/access-logs#status
:rtype: dict or None
:returns: a dict with keys ``logBucket`` and ``logObjectPrefix``
(if logging is enabled), or ``None`` (if not).
"""
info = self._properties.get('logging')
return copy.deepcopy(info)
def enable_logging(self, bucket_name, object_prefix=''):
"""Enable access logging for this bucket.
See https://cloud.google.com/storage/docs/access-logs
:type bucket_name: str
:param bucket_name: name of bucket in which to store access logs
:type object_prefix: str
:param object_prefix: prefix for access log filenames
"""
info = {'logBucket': bucket_name, 'logObjectPrefix': object_prefix}
self._patch_property('logging', info)
def disable_logging(self):
"""Disable access logging for this bucket.
See https://cloud.google.com/storage/docs/access-logs#disabling
"""
self._patch_property('logging', None)
@property
def metageneration(self):
"""Retrieve the metageneration for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
:rtype: int or ``NoneType``
:returns: The metageneration of the bucket or ``None`` if the bucket's
resource has not been loaded from the server.
"""
metageneration = self._properties.get('metageneration')
if metageneration is not None:
return int(metageneration)
@property
def owner(self):
"""Retrieve info about the owner of the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
:rtype: dict or ``NoneType``
:returns: Mapping of owner's role/ID. Returns ``None`` if the bucket's
resource has not been loaded from the server.
"""
return copy.deepcopy(self._properties.get('owner'))
@property
def project_number(self):
"""Retrieve the number of the project to which the bucket is assigned.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
:rtype: int or ``NoneType``
:returns: The project number that owns the bucket or ``None`` if
the bucket's resource has not been loaded from the server.
"""
project_number = self._properties.get('projectNumber')
if project_number is not None:
return int(project_number)
@property
def retention_policy_effective_time(self):
"""Retrieve the effective time of the bucket's retention policy.
:rtype: datetime.datetime or ``NoneType``
:returns: point-in time at which the bucket's retention policy is
effective, or ``None`` if the property is not
set locally.
"""
policy = self._properties.get('retentionPolicy')
if policy is not None:
timestamp = policy.get('effectiveTime')
if timestamp is not None:
return _rfc3339_to_datetime(timestamp)
@property
def retention_policy_locked(self):
"""Retrieve whthere the bucket's retention policy is locked.
:rtype: bool
:returns: True if the bucket's policy is locked, or else False
if the policy is not locked, or the property is not
set locally.
"""
policy = self._properties.get('retentionPolicy')
if policy is not None:
return policy.get('isLocked')
@property
def retention_period(self):
"""Retrieve or set the retention period for items in the bucket.
:rtype: int or ``NoneType``
:returns: number of seconds to retain items after upload or release
from event-based lock, or ``None`` if the property is not
set locally.
"""
policy = self._properties.get('retentionPolicy')
if policy is not None:
period = policy.get('retentionPeriod')
if period is not None:
return int(period)
@retention_period.setter
def retention_period(self, value):
"""Set the retention period for items in the bucket.
:type value: int
:param value:
number of seconds to retain items after upload or release from
event-based lock.
:raises ValueError: if the bucket's retention policy is locked.
"""
policy = self._properties.setdefault('retentionPolicy', {})
if value is not None:
policy['retentionPeriod'] = str(value)
else:
policy = None
self._patch_property('retentionPolicy', policy)
@property
def self_link(self):
"""Retrieve the URI for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
:rtype: str or ``NoneType``
:returns: The self link for the bucket or ``None`` if
the bucket's resource has not been loaded from the server.
"""
return self._properties.get('selfLink')
@property
def storage_class(self):
"""Retrieve or set the storage class for the bucket.
See https://cloud.google.com/storage/docs/storage-classes
:setter: Set the storage class for this bucket.
:getter: Gets the storage class for this bucket.
:rtype: str or ``NoneType``
:returns: If set, one of "MULTI_REGIONAL", "REGIONAL",
"NEARLINE", "COLDLINE", "STANDARD", or
"DURABLE_REDUCED_AVAILABILITY", else ``None``.
"""
return self._properties.get('storageClass')
@storage_class.setter
def storage_class(self, value):
"""Set the storage class for the bucket.
See https://cloud.google.com/storage/docs/storage-classes
:type value: str
:param value: one of "MULTI_REGIONAL", "REGIONAL", "NEARLINE",
"COLDLINE", "STANDARD", or "DURABLE_REDUCED_AVAILABILITY"
"""
if value not in self._STORAGE_CLASSES:
raise ValueError('Invalid storage class: %s' % (value,))
self._patch_property('storageClass', value)
@property
def time_created(self):
"""Retrieve the timestamp at which the bucket was created.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
:rtype: :class:`datetime.datetime` or ``NoneType``
:returns: Datetime object parsed from RFC3339 valid timestamp, or
``None`` if the bucket's resource has not been loaded
from the server.
"""
value = self._properties.get('timeCreated')
if value is not None:
return _rfc3339_to_datetime(value)
@property
def versioning_enabled(self):
"""Is versioning enabled for this bucket?
See https://cloud.google.com/storage/docs/object-versioning for
details.
:setter: Update whether versioning is enabled for this bucket.
:getter: Query whether versioning is enabled for this bucket.
:rtype: bool
:returns: True if enabled, else False.
"""
versioning = self._properties.get('versioning', {})
return versioning.get('enabled', False)
@versioning_enabled.setter
def versioning_enabled(self, value):
"""Enable versioning for this bucket.
See https://cloud.google.com/storage/docs/object-versioning for
details.
:type value: convertible to boolean
:param value: should versioning be enabled for the bucket?
"""
self._patch_property('versioning', {'enabled': bool(value)})
@property
def requester_pays(self):
"""Does the requester pay for API requests for this bucket?
See https://cloud.google.com/storage/docs/requester-pays for
details.
:setter: Update whether requester pays for this bucket.
:getter: Query whether requester pays for this bucket.
:rtype: bool
:returns: True if requester pays for API requests for the bucket,
else False.
"""
billing = self._properties.get('billing', {})
return billing.get('requesterPays', False)
@requester_pays.setter
def requester_pays(self, value):
"""Update whether requester pays for API requests for this bucket.
See https://cloud.google.com/storage/docs/requester-pays for
details.
:type value: convertible to boolean
:param value: should requester pay for API requests for the bucket?
"""
self._patch_property('billing', {'requesterPays': bool(value)})
def configure_website(self, main_page_suffix=None, not_found_page=None):
"""Configure website-related properties.
See https://cloud.google.com/storage/docs/hosting-static-website
.. note::
This only works if your bucket name is a domain name
(which requires verifying ownership of the domain).
If you want this bucket to host a website, just provide the name
of an index page and a page to use when a blob isn't found:
.. literalinclude:: snippets.py
:start-after: [START configure_website]
:end-before: [END configure_website]
You probably should also make the whole bucket public:
.. literalinclude:: snippets.py
:start-after: [START make_public]
:end-before: [END make_public]
This says: "Make the bucket public, and all the stuff already in
the bucket, and anything else I add to the bucket. Just make it
all public."
:type main_page_suffix: str
:param main_page_suffix: The page to use as the main page
of a directory.
Typically something like index.html.
:type not_found_page: str
:param not_found_page: The file to use when a page isn't found.
"""
data = {
'mainPageSuffix': main_page_suffix,
'notFoundPage': not_found_page,
}
self._patch_property('website', data)
def disable_website(self):
"""Disable the website configuration for this bucket.
This is really just a shortcut for setting the website-related
attributes to ``None``.
"""
return self.configure_website(None, None)
def get_iam_policy(self, client=None):
"""Retrieve the IAM policy for the bucket.
See
https://cloud.google.com/storage/docs/json_api/v1/buckets/getIamPolicy
If :attr:`user_project` is set, bills the API request to that project.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the current bucket.
:rtype: :class:`google.cloud.iam.Policy`
:returns: the policy instance, based on the resource returned from
the ``getIamPolicy`` API request.
"""
client = self._require_client(client)
query_params = {}
if self.user_project is not None:
query_params['userProject'] = self.user_project
info = client._connection.api_request(
method='GET',
path='%s/iam' % (self.path,),
query_params=query_params,
_target_object=None)
return Policy.from_api_repr(info)
def set_iam_policy(self, policy, client=None):
"""Update the IAM policy for the bucket.
See
https://cloud.google.com/storage/docs/json_api/v1/buckets/setIamPolicy
If :attr:`user_project` is set, bills the API request to that project.
:type policy: :class:`google.cloud.iam.Policy`
:param policy: policy instance used to update bucket's IAM policy.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the current bucket.
:rtype: :class:`google.cloud.iam.Policy`
:returns: the policy instance, based on the resource returned from
the ``setIamPolicy`` API request.
"""
client = self._require_client(client)
query_params = {}
if self.user_project is not None:
query_params['userProject'] = self.user_project
resource = policy.to_api_repr()
resource['resourceId'] = self.path
info = client._connection.api_request(
method='PUT',
path='%s/iam' % (self.path,),
query_params=query_params,
data=resource,
_target_object=None)
return Policy.from_api_repr(info)
def test_iam_permissions(self, permissions, client=None):
"""API call: test permissions
See
https://cloud.google.com/storage/docs/json_api/v1/buckets/testIamPermissions
If :attr:`user_project` is set, bills the API request to that project.
:type permissions: list of string
:param permissions: the permissions to check
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the current bucket.
:rtype: list of string
:returns: the permissions returned by the ``testIamPermissions`` API
request.
"""
client = self._require_client(client)
query_params = {'permissions': permissions}
if self.user_project is not None:
query_params['userProject'] = self.user_project
path = '%s/iam/testPermissions' % (self.path,)
resp = client._connection.api_request(
method='GET',
path=path,
query_params=query_params)
return resp.get('permissions', [])
def make_public(self, recursive=False, future=False, client=None):
"""Update bucket's ACL, granting read access to anonymous users.
:type recursive: bool
:param recursive: If True, this will make all blobs inside the bucket
public as well.
:type future: bool
:param future: If True, this will make all objects created in the
future public as well.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the current bucket.
:raises ValueError:
If ``recursive`` is True, and the bucket contains more than 256
blobs. This is to prevent extremely long runtime of this
method. For such buckets, iterate over the blobs returned by
:meth:`list_blobs` and call
:meth:`~google.cloud.storage.blob.Blob.make_public`
for each blob.
"""
self.acl.all().grant_read()
self.acl.save(client=client)
if future:
doa = self.default_object_acl
if not doa.loaded:
doa.reload(client=client)
doa.all().grant_read()
doa.save(client=client)
if recursive:
blobs = list(self.list_blobs(
projection='full',
max_results=self._MAX_OBJECTS_FOR_ITERATION + 1,
client=client))
if len(blobs) > self._MAX_OBJECTS_FOR_ITERATION:
message = (
"Refusing to make public recursively with more than "
"%d objects. If you actually want to make every object "
"in this bucket public, iterate through the blobs "
"returned by 'Bucket.list_blobs()' and call "
"'make_public' on each one."
) % (self._MAX_OBJECTS_FOR_ITERATION,)
raise ValueError(message)
for blob in blobs:
blob.acl.all().grant_read()
blob.acl.save(client=client)
def make_private(self, recursive=False, future=False, client=None):
"""Update bucket's ACL, revoking read access for anonymous users.
:type recursive: bool
:param recursive: If True, this will make all blobs inside the bucket
private as well.
:type future: bool
:param future: If True, this will make all objects created in the
future private as well.
:type client: :class:`~google.cloud.storage.client.Client` or
``NoneType``
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the current bucket.
:raises ValueError:
If ``recursive`` is True, and the bucket contains more than 256
blobs. This is to prevent extremely long runtime of this
method. For such buckets, iterate over the blobs returned by
:meth:`list_blobs` and call
:meth:`~google.cloud.storage.blob.Blob.make_private`
for each blob.
"""
self.acl.all().revoke_read()
self.acl.save(client=client)
if future:
doa = self.default_object_acl
if not doa.loaded:
doa.reload(client=client)
doa.all().revoke_read()
doa.save(client=client)
if recursive:
blobs = list(self.list_blobs(
projection='full',
max_results=self._MAX_OBJECTS_FOR_ITERATION + 1,
client=client))
if len(blobs) > self._MAX_OBJECTS_FOR_ITERATION:
message = (
'Refusing to make private recursively with more than '
'%d objects. If you actually want to make every object '
"in this bucket private, iterate through the blobs "
"returned by 'Bucket.list_blobs()' and call "
"'make_private' on each one."
) % (self._MAX_OBJECTS_FOR_ITERATION,)
raise ValueError(message)
for blob in blobs:
blob.acl.all().revoke_read()
blob.acl.save(client=client)
def generate_upload_policy(
self, conditions, expiration=None, client=None):
"""Create a signed upload policy for uploading objects.
This method generates and signs a policy document. You can use
`policy documents`_ to allow visitors to a website to upload files to
Google Cloud Storage without giving them direct write access.
For example:
.. literalinclude:: snippets.py
:start-after: [START policy_document]
:end-before: [END policy_document]
.. _policy documents:
https://cloud.google.com/storage/docs/xml-api\
/post-object#policydocument
:type expiration: datetime
:param expiration: Optional expiration in UTC. If not specified, the
policy will expire in 1 hour.
:type conditions: list
:param conditions: A list of conditions as described in the
`policy documents`_ documentation.
:type client: :class:`~google.cloud.storage.client.Client`
:param client: Optional. The client to use. If not passed, falls back
to the ``client`` stored on the current bucket.
:rtype: dict
:returns: A dictionary of (form field name, form field value) of form
fields that should be added to your HTML upload form in order
to attach the signature.
"""
client = self._require_client(client)
credentials = client._base_connection.credentials
_signing.ensure_signed_credentials(credentials)
if expiration is None:
expiration = _NOW() + datetime.timedelta(hours=1)
conditions = conditions + [
{'bucket': self.name},
]
policy_document = {
'expiration': _datetime_to_rfc3339(expiration),
'conditions': conditions,
}
encoded_policy_document = base64.b64encode(
json.dumps(policy_document).encode('utf-8'))
signature = base64.b64encode(
credentials.sign_bytes(encoded_policy_document))
fields = {
'bucket': self.name,
'GoogleAccessId': credentials.signer_email,
'policy': encoded_policy_document.decode('utf-8'),
'signature': signature.decode('utf-8'),
}
return fields
def lock_retention_policy(self, client=None):
"""Lock the bucket's retention policy.
:raises ValueError:
if the bucket has no metageneration (i.e., new or never reloaded);
if the bucket has no retention policy assigned;
if the bucket's retention policy is already locked.
"""
if 'metageneration' not in self._properties:
raise ValueError(
"Bucket has no metageneration: try 'reload'?")
policy = self._properties.get('retentionPolicy')
if policy is None:
raise ValueError(
"Bucket has no retention policy assigned: try 'reload'?")
if policy.get('isLocked'):
raise ValueError("Bucket's retention policy is already locked.")
client = self._require_client(client)
query_params = {'ifMetagenerationMatch': self.metageneration}
if self.user_project is not None:
query_params['userProject'] = self.user_project
path = '/b/{}/lockRetentionPolicy'.format(self.name)
api_response = client._connection.api_request(
method='POST', path=path, query_params=query_params,
_target_object=self)
self._set_properties(api_response)
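The policy-document flow in `generate_upload_policy` above (add the bucket condition, JSON-serialize, base64-encode, sign the encoded bytes) can be sketched standalone. Here an HMAC signer stands in for the service account's RSA key — that substitution, and the helper name `build_upload_policy`, are assumptions for illustration only:

```python
import base64
import datetime
import hashlib
import hmac
import json

def build_upload_policy(bucket_name, conditions, secret, expiration=None):
    # Mirror of the method above: append the bucket condition, serialize,
    # base64-encode, then sign the *encoded* policy (not the raw JSON).
    if expiration is None:
        expiration = datetime.datetime.utcnow() + datetime.timedelta(hours=1)
    policy = {
        'expiration': expiration.strftime('%Y-%m-%dT%H:%M:%SZ'),
        'conditions': conditions + [{'bucket': bucket_name}],
    }
    encoded = base64.b64encode(json.dumps(policy).encode('utf-8'))
    signature = base64.b64encode(
        hmac.new(secret, encoded, hashlib.sha256).digest())
    return {
        'bucket': bucket_name,
        'policy': encoded.decode('utf-8'),
        'signature': signature.decode('utf-8'),
    }

fields = build_upload_policy('my-bucket', [{'acl': 'private'}], b'secret')
```

The returned dict matches the form fields the real method produces, minus `GoogleAccessId`, which only makes sense with real service-account credentials.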
|
I may or may not have mentioned that April has been one of the busiest months ever. I am the kind of person who likes to relax on the weekends... or run. That's the same thing. Amiright?
People are always asking me, what are you doing this weekend? The answer is usually "nothing" or "running." But not in April. I had a major event every weekend and have barely had time to breathe. Between this blog, taking care of my clients, and trying to squeeze in my own workouts on top of a packed schedule, let's just say it's been challenging. It was a fun month with a race-cation, a milestone anniversary, out-of-town visitors, a weekend fitness professionals' convention, and concerts (and meeting my favorite rock star, no biggie). I plan to catch up on sleep and sanity in May.
For this week's "Workout Wednesday" I am sharing three of my most popular quick strength training workouts that can be done before your run as part of a warmup, or repeated as a circuit on non-running days. Strength training is important to becoming a well-rounded athlete: it will help you run stronger and faster, and avoid the muscle imbalances that can lead to injury.
You can download a printable PDF version of the workouts by entering your email address below. If you are already subscribed to the blog, then adding your email address again to get the download will not result in duplicate emails. Or just save to Pinterest for later.
Add your email address to download a printable PDF of all three strength training for runners workouts and to receive news and updates from the strength and running blog.
Questions? Need coaching? I’d love to help.
|
#!/usr/bin/env python
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
# This script processes NSS .def files according to the rules defined in
# a comment at the top of each one. The files are used to define the
# exports from NSS shared libraries, with -DEFFILE on Windows, a linker
# script on Linux, or with -exported_symbols_list on OS X.
#
# The NSS build system processes them using a series of sed replacements,
# but the Mozilla build system is already running a Python script to generate
# the file so it's simpler to just do the replacement in Python.
import buildconfig
def main(output, input):
is_darwin = buildconfig.substs['OS_ARCH'] == 'Darwin'
with open(input, 'rb') as f:
for line in f:
line = line.rstrip()
# Remove all lines containing ';-'
if ';-' in line:
continue
# On OS X, remove all lines containing ';+'
if is_darwin and ';+' in line:
continue
# Remove the string ' DATA '.
line = line.replace(' DATA ', '')
# Remove the string ';+'
line = line.replace(';+', '')
# Remove the string ';;'
line = line.replace(';;', '')
# If a ';' is present, remove everything after it,
# and on OS X, remove it as well.
i = line.find(';')
if i != -1:
if is_darwin:
line = line[:i]
else:
line = line[:i+1]
# On OS X, symbols get an underscore in front.
if line and is_darwin:
output.write('_')
output.write(line)
output.write('\n')
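The sed-style rules described in the header comment can be paraphrased as a standalone line filter. This is a Python 3 sketch of my own (the script above operates on bytes under Python 2), with `process_def_line` a hypothetical helper name:

```python
def process_def_line(line, is_darwin):
    """Apply the .def rewrite rules; return None to drop the line."""
    line = line.rstrip()
    if ';-' in line:                      # always dropped
        return None
    if is_darwin and ';+' in line:        # dropped on OS X only
        return None
    line = line.replace(' DATA ', '').replace(';+', '').replace(';;', '')
    i = line.find(';')
    if i != -1:
        # Keep the ';' on Linux linker scripts; strip it (and the rest)
        # for the OS X exported-symbols list.
        line = line[:i] if is_darwin else line[:i + 1]
    if line and is_darwin:                # OS X symbols get an underscore
        line = '_' + line
    return line

# The same symbol line renders differently per platform:
assert process_def_line('NSS_Init;', is_darwin=True) == '_NSS_Init'
assert process_def_line('NSS_Init;', is_darwin=False) == 'NSS_Init;'
```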
|
Nothing better than these B2s for a casual yet chic sports-luxe look. Light and breathable, it’s impossible to go wrong with these.
This is a women's shoe, available in sizes 36-41.
|
#!/usr/bin/env python
import argparse
import os
import sys
import pyrax
pyrax.set_setting("identity_type", "rackspace")
pyrax.set_credential_file(os.path.expanduser("~/.pyraxcreds"))
cm = pyrax.cloud_monitoring
auto = pyrax.autoscale
def get_entity(ip):
"""Create or get an entity."""
entities = cm.list_entities()
matches = [entity for entity in entities if ip in entity.ip_addresses]
if len(matches) == 1:
return matches[0]
else:
ent = cm.create_entity(label="%s-entity" % ip,
ip_addresses={"ip": ip})
return ent
def create_email_notification(args):
"""Create an email notification."""
entity = get_entity(args.ip)
# Create a check on our entity.
# This will do an HTTP GET request on the API every 60 seconds with
# a 10 second timeout.
check = cm.create_check(entity, label="my-check",
check_type="remote.http",
details={"url": "http://bikeshed.io/api/v1.0/color",
"method": "GET"},
period=60, timeout=10, # How often to check, and what timeout
monitoring_zones_poll=["mzdfw"], # Which DCs to check from
target_alias="ip" # The public IP for our entity
)
# Create an email notification.
email = cm.create_notification("email", label="my-email",
details={"address": "brian@python.org"})
# Create a notification plan that will email for all states.
plan = cm.create_notification_plan("my-plan", ok_state=email,
warning_state=email, critical_state=email)
# Create an alarm that will cause a critical state to be reached
# if our HTTP GET check's status-code metric equals 111.
alarm = cm.create_alarm(entity, check, plan,
"if (metric[\"code\"] == \"111\") { return new AlarmStatus(CRITICAL); }")
def create_webhook_notification(args):
"""Create a webhook notification."""
entity = get_entity(args.ip)
# Create a check on our entity.
# This will do an HTTP GET request on the API every 60 seconds with
# a 10 second timeout.
check = cm.create_check(entity, label="my-check",
check_type="remote.http",
details={"url": "http://bikeshed.io/api/v1.0/color",
"method": "GET"},
period=60, timeout=10, # How often to check, and what timeout
monitoring_zones_poll=["mzdfw"], # Which DCs to check from
target_alias="ip" # The public IP for our entity
)
# Now we bring up our autoscale scaling group.
group = auto.list()[0]
# Get our policy, which has the webhook.
policy = group.list_policies()[0]
# Get the hook out of the policy.
hook = policy.list_webhooks()[0]
# Create an email notification.
email = cm.create_notification("email", label="my-email",
details={"address": "brian@python.org"})
# Create a web hook notification with the HREF link in the hook.
webhook = cm.create_notification("webhook", label="my-webhook",
details={"url": hook.links[1]["href"]})
# Create another notification plan which will call our hook
plan = cm.create_notification_plan("my-webhook", ok_state=email,
warning_state=email, critical_state=webhook)
# Create an alarm
alarm = cm.create_alarm(entity, check, plan,
"if (metric[\"code\"] == \"111\") { return new AlarmStatus(CRITICAL); }")
def _main():
parser = argparse.ArgumentParser()
parser.add_argument("--ip")
subparsers = parser.add_subparsers()
email_notify = subparsers.add_parser("email-notify")
email_notify.set_defaults(func=create_email_notification)
webhook_notify = subparsers.add_parser("webhook-notify")
webhook_notify.set_defaults(func=create_webhook_notification)
args = parser.parse_args()
args.func(args)
return 0
if __name__ == "__main__":
sys.exit(_main())
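The `set_defaults(func=...)` dispatch used in `_main()` works the same in isolation; the dummy handlers below stand in for the pyrax calls:

```python
import argparse

def make_parser(handlers):
    """Build a parser whose subcommands dispatch via set_defaults(func=...)."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--ip")
    subparsers = parser.add_subparsers()
    for name, func in handlers.items():
        sub = subparsers.add_parser(name)
        sub.set_defaults(func=func)
    return parser

calls = []
parser = make_parser({
    "email-notify": lambda args: calls.append(("email", args.ip)),
    "webhook-notify": lambda args: calls.append(("webhook", args.ip)),
})
args = parser.parse_args(["--ip", "10.0.0.1", "email-notify"])
args.func(args)  # dispatches to the email-notify handler
```

Each subcommand stores its handler on the parsed namespace, so the main function stays a one-liner regardless of how many subcommands are added.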
|
Prevention and pest control are our core business! The EWS pest control specialists solve each pest problem professionally and discreetly, and offer prevention advice to avoid inconvenience and damage. As part of pest control, EWS Pest Control delivers the best result based on the principles of IPM (Integrated Pest Management).
Our quality takes shape through people who know their trade. The EWS pest control team consists of well-educated, certified specialists (also known as IPM service technicians) who are up to date on the applicable legislation. They keep their knowledge current through regular training, courses and continuing education. This guarantees a professional way of working and a focus on the safety of people and planet.
|
"""Template tags relating to tickets."""
from django import template
import json
register = template.Library()
@register.filter()
def has_tickets(user, event):
"""Return True if the user has tickets for this event."""
if user.is_authenticated():
return event.tickets.filter(user=user, cancelled=False).count() > 0
return False
@register.filter()
def tickets(user, event):
"""Return the tickets the user has for the event."""
if user.is_authenticated():
return event.tickets.filter(user=user, cancelled=False)
return []
@register.filter()
def orders(user, event):
"""Return the orders the user has for the event."""
if user.is_authenticated():
return [o for o in event.orders.filter(user=user) if not o.cancelled]
return []
@register.filter()
def other_tickets(user, event):
"""Return the tickets the user has for the event that have no order."""
# All of this functionality is legacy and will be removed
if user.is_authenticated():
return event.tickets.filter(user=user, cancelled=False, order=None)
return []
@register.filter()
def visible_tickets_json(event, user):
"""Return json of available tickets for ticket widget."""
def ticket_type_to_dict(ticket_type, purchasable):
ret = {
"name": ticket_type.name,
"remaining_tickets": ticket_type.remaining_tickets,
"price": ticket_type.price,
"pk": ticket_type.pk}
if not purchasable:
ret["remaining_tickets"] = 0
return ret
return json.dumps([ticket_type_to_dict(t, t.purchasable_by(user)) for t in
event.ticket_types.visible_to(user)])
@register.filter()
def purchasable_by(ticket_type, user):
"""Return whether the ticket type is purchasable by the user."""
return ticket_type.purchasable_by(user)
@register.filter()
def purchasable_tickets_no(event, user):
"""Return the number of tickets purchasable by a user for an event."""
return sum([t.remaining_tickets for t in
event.ticket_types.purchasable_by(user)])
@register.filter()
def waiting_list_available(event, user):
"""Return if waiting lists are available for this user."""
return len([t for t in event.ticket_types.waiting_list_available()
if t.visible_to(user)]) > 0
@register.filter()
def rsvp_going(user, event):
"""Return True if the user has indicated they will attend this event."""
if user.is_authenticated():
return user.rsvps.filter(event=event, going=True).count() > 0
return False
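The serialization step inside `visible_tickets_json` can be shown with a stand-in ticket type — a plain namedtuple here rather than the real Django model, purely for illustration:

```python
import json
from collections import namedtuple

TicketType = namedtuple('TicketType', 'name remaining_tickets price pk')

def ticket_type_to_dict(ticket_type, purchasable):
    # Same shape the ticket widget consumes; a type the user cannot
    # purchase is reported as having zero remaining tickets.
    ret = {
        'name': ticket_type.name,
        'remaining_tickets': ticket_type.remaining_tickets,
        'price': ticket_type.price,
        'pk': ticket_type.pk,
    }
    if not purchasable:
        ret['remaining_tickets'] = 0
    return ret

types = [TicketType('Early bird', 10, 25, 1), TicketType('Standard', 50, 40, 2)]
payload = json.dumps([ticket_type_to_dict(t, t.pk == 1) for t in types])
```

Zeroing `remaining_tickets` rather than omitting the type lets the widget still render sold-out or restricted tiers.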
|
Stir-fried Noodles with Sliced Pork (Mì Xào Thịt Heo Lát) is, again, one of my experiments, and I consider it one of the Best Vietnamese Food. Setting up this blog has meant experimenting with new recipes so that I can keep up with my postings.
Before you go make this Best Vietnamese Food, let me tell you first that it is more a recipe of convenience and simplicity than sheer cooking from scratch. The stewed pork slices come from a can, and the noodles are instant. Feel free to use any type of instant noodles, but I think it works best with noodles that are slightly flat, not unlike fettuccine or linguine. As for the result, it's simply delicious.
Using a non-stick frying pan if available, fry the eggs in a couple of batches to form thin layers of fried egg. Cut the fried eggs into strips. Next, boil sufficient water in a pot. Cook the noodles till al dente, then rinse them under cold water. Place them in a bowl, drizzle the sesame oil over the noodles, and stir well. Set aside.
Heat oil and sauté the garlic with the carrots. Add fish sauce and about 50 ml of water. Open the tin of stewed pork slices and pour half of the stew into the carrots. Let the gravy simmer; add the rest of the water plus seasoning. Add the cooked noodles and bean sprouts and reduce to medium heat, stirring well to ensure the noodles are coated with gravy. Cook for no more than 2 minutes. You are nearly done making this Best Vietnamese Food.
Meanwhile, with the frying pan, heat up the pork slices with the remaining gravy from the tin till it begins to boil. Remove from heat. Pour gravy onto the noodles and mix well. Serve noodles onto individual plates and garnish with egg strips and pork slices. Hope you like this Best Vietnamese Food and have a good day.
|
# -*- coding: utf-8 -*-
# Copyright 2020 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
import proto # type: ignore
from google.ads.googleads.v8.enums.types import access_role as gage_access_role
from google.ads.googleads.v8.enums.types import response_content_type as gage_response_content_type
from google.ads.googleads.v8.resources.types import customer as gagr_customer
from google.protobuf import field_mask_pb2 # type: ignore
__protobuf__ = proto.module(
package='google.ads.googleads.v8.services',
marshal='google.ads.googleads.v8',
manifest={
'GetCustomerRequest',
'MutateCustomerRequest',
'CreateCustomerClientRequest',
'CustomerOperation',
'CreateCustomerClientResponse',
'MutateCustomerResponse',
'MutateCustomerResult',
'ListAccessibleCustomersRequest',
'ListAccessibleCustomersResponse',
},
)
class GetCustomerRequest(proto.Message):
r"""Request message for
[CustomerService.GetCustomer][google.ads.googleads.v8.services.CustomerService.GetCustomer].
Attributes:
resource_name (str):
Required. The resource name of the customer
to fetch.
"""
resource_name = proto.Field(
proto.STRING,
number=1,
)
class MutateCustomerRequest(proto.Message):
r"""Request message for
[CustomerService.MutateCustomer][google.ads.googleads.v8.services.CustomerService.MutateCustomer].
Attributes:
customer_id (str):
Required. The ID of the customer being
modified.
operation (google.ads.googleads.v8.services.types.CustomerOperation):
Required. The operation to perform on the
customer
validate_only (bool):
If true, the request is validated but not
executed. Only errors are returned, not results.
response_content_type (google.ads.googleads.v8.enums.types.ResponseContentTypeEnum.ResponseContentType):
The response content type setting. Determines
whether the mutable resource or just the
resource name should be returned post mutation.
"""
customer_id = proto.Field(
proto.STRING,
number=1,
)
operation = proto.Field(
proto.MESSAGE,
number=4,
message='CustomerOperation',
)
validate_only = proto.Field(
proto.BOOL,
number=5,
)
response_content_type = proto.Field(
proto.ENUM,
number=6,
enum=gage_response_content_type.ResponseContentTypeEnum.ResponseContentType,
)
class CreateCustomerClientRequest(proto.Message):
r"""Request message for
[CustomerService.CreateCustomerClient][google.ads.googleads.v8.services.CustomerService.CreateCustomerClient].
Attributes:
customer_id (str):
Required. The ID of the Manager under whom
client customer is being created.
customer_client (google.ads.googleads.v8.resources.types.Customer):
Required. The new client customer to create.
The resource name on this customer will be
ignored.
email_address (str):
Email address of the user who should be
invited on the created client customer.
Accessible only to customers on the allow-list.
access_role (google.ads.googleads.v8.enums.types.AccessRoleEnum.AccessRole):
The proposed role of user on the created
client customer. Accessible only to customers on
the allow-list.
validate_only (bool):
If true, the request is validated but not
executed. Only errors are returned, not results.
"""
customer_id = proto.Field(
proto.STRING,
number=1,
)
customer_client = proto.Field(
proto.MESSAGE,
number=2,
message=gagr_customer.Customer,
)
email_address = proto.Field(
proto.STRING,
number=5,
optional=True,
)
access_role = proto.Field(
proto.ENUM,
number=4,
enum=gage_access_role.AccessRoleEnum.AccessRole,
)
validate_only = proto.Field(
proto.BOOL,
number=6,
)
class CustomerOperation(proto.Message):
r"""A single update on a customer.
Attributes:
update (google.ads.googleads.v8.resources.types.Customer):
Mutate operation. Only updates are supported
for customer.
update_mask (google.protobuf.field_mask_pb2.FieldMask):
FieldMask that determines which resource
fields are modified in an update.
"""
update = proto.Field(
proto.MESSAGE,
number=1,
message=gagr_customer.Customer,
)
update_mask = proto.Field(
proto.MESSAGE,
number=2,
message=field_mask_pb2.FieldMask,
)
class CreateCustomerClientResponse(proto.Message):
r"""Response message for CreateCustomerClient mutate.
Attributes:
resource_name (str):
The resource name of the newly created
customer client.
invitation_link (str):
Link for inviting user to access the created
customer. Accessible to allowlisted customers
only.
"""
resource_name = proto.Field(
proto.STRING,
number=2,
)
invitation_link = proto.Field(
proto.STRING,
number=3,
)
class MutateCustomerResponse(proto.Message):
r"""Response message for customer mutate.
Attributes:
result (google.ads.googleads.v8.services.types.MutateCustomerResult):
Result for the mutate.
"""
result = proto.Field(
proto.MESSAGE,
number=2,
message='MutateCustomerResult',
)
class MutateCustomerResult(proto.Message):
r"""The result for the customer mutate.
Attributes:
resource_name (str):
Returned for successful operations.
customer (google.ads.googleads.v8.resources.types.Customer):
The mutated customer with only mutable fields after mutate.
The fields will only be returned when response_content_type
is set to "MUTABLE_RESOURCE".
"""
resource_name = proto.Field(
proto.STRING,
number=1,
)
customer = proto.Field(
proto.MESSAGE,
number=2,
message=gagr_customer.Customer,
)
class ListAccessibleCustomersRequest(proto.Message):
r"""Request message for
[CustomerService.ListAccessibleCustomers][google.ads.googleads.v8.services.CustomerService.ListAccessibleCustomers].
"""
class ListAccessibleCustomersResponse(proto.Message):
r"""Response message for
[CustomerService.ListAccessibleCustomers][google.ads.googleads.v8.services.CustomerService.ListAccessibleCustomers].
Attributes:
resource_names (Sequence[str]):
Resource name of customers directly
accessible by the user authenticating the call.
"""
resource_names = proto.RepeatedField(
proto.STRING,
number=1,
)
__all__ = tuple(sorted(__protobuf__.manifest))
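`CustomerOperation` above pairs an `update` message with a `FieldMask`; only the masked paths are written. That semantics can be illustrated with plain dicts — a simplification of real protobuf merge behavior (no nested paths), with `apply_field_mask` a hypothetical helper:

```python
def apply_field_mask(target, update, paths):
    """Copy only the masked top-level fields from update into target."""
    for path in paths:
        if path in update:
            target[path] = update[path]
    return target

customer = {'id': '123', 'descriptive_name': 'Old', 'currency_code': 'USD'}
update = {'descriptive_name': 'New', 'currency_code': 'EUR'}

# Only descriptive_name is in the mask, so currency_code is untouched
# even though the update message carries a value for it.
apply_field_mask(customer, update, ['descriptive_name'])
```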
|
The Wordpool Press Bookstore will now look a little different when users click on it from our website. In favor of a cleaner look and a multitude of advanced store options, we have moved to a different service provider. Users who access our store will now be able to participate in a complete buying experience, with viewable samples, a shopping cart, a wishlist, and an optional login. Check out the new Wordpool Press Bookstore here.
|
import re
from convert.base.generator import BaseGenerator
from convert.base.parsedobject import *
import datetime
class Generator(BaseGenerator):
def _generate_default_constructor(self):
if self.data.type == ParsedObjectType.Enum:
return ""
constructor = " def __init__(self):\n"
if self.data.data.__len__() == 0:
constructor += " pass\n"
else:
for member in self.data.data:
if member.type == ParsedObjectType.Array:
constructor += " self._{0} = []\n".format(_camel_case(member.name))
elif member.type == ParsedObjectType.String:
constructor += " self._{0} = \"\"\n".format(_camel_case(member.name))
elif member.type == ParsedObjectType.Int:
constructor += " self._{0} = 0\n".format(_camel_case(member.name))
elif member.type == ParsedObjectType.Float:
constructor += " self._{0} = 0.0\n".format(_camel_case(member.name))
elif member.type == ParsedObjectType.Object:
constructor += " self._{0} = None\n".format(_camel_case(member.name))
elif member.type == ParsedObjectType.Bool:
constructor += " self._{0} = False\n".format(_camel_case(member.name))
elif member.type == ParsedObjectType.Enum:
constructor += " self._{0} = {1}(0)\n".format(_camel_case(member.name), _capitalize(member.type_name))
constructor += "\n"
return constructor
def _generate_footer(self):
return ""
def _generate_member_access(self):
result = ""
for member in self.data.data:
result += self._generate_getter_setter(member)
return result
def _generate_header(self):
result = ""
# Enums only need to import enum, and won't have a factory
if self.data.type == ParsedObjectType.Enum:
result += "from enum import Enum\n"
else:
for factory in self.factories:
result += factory.generate_import()
for member in self.data.data:
if _capitalize(member.type_name) == _capitalize(self.data.type_name):
# if the member is the same class as the current class then we shouldn't import it
continue
if member.type == ParsedObjectType.Object or member.type == ParsedObjectType.Enum:
result += "from {0} import {1}\n".format(member.type_name.lower(), _capitalize(member.type_name))
elif member.type == ParsedObjectType.Array:
child = member.data[0]
if _capitalize(child.type_name) == _capitalize(self.data.type_name):
continue
if child.type == ParsedObjectType.Object:
result += "from {0} import {1}\n".format(child.type_name.lower(), _capitalize(child.type_name))
date_str = "Date: {0}".format(datetime.date.today())
if BaseGenerator.skip_date_comment:
date_str = ""
date_str = date_str.ljust(82)
result += ("#####################################################################################\n"
"# This file is generated by Json2Class (https://github.com/DragonSpawn/Json2Class) #\n"
"# Modifications to this file will be lost the next time you run the tool. #\n"
"# {0}#\n"
"#####################################################################################\n\n").format(date_str)
inheritance_str = "object"
if self.data.type == ParsedObjectType.Enum:
inheritance_str = "Enum"
result += "\nclass {0}({1}):\n".format(_capitalize(self.data.type_name), inheritance_str)
return result
def file_name(self, json_name):
return json_name.lower() + ".py"
def _generate_getter_setter(self, member):
if self.data.type == ParsedObjectType.Enum:
return " {0} = {1}\n".format(_capitalize(member.name), member.data)
return (" @property\n"
" def {0}(self):\n"
" \"\"\":rtype: {1}\"\"\"\n"
" return self._{0}\n\n"
" @{0}.setter\n"
" def {0}(self, value):\n"
" \"\"\":type value: {1}\n"
" :rtype: None\"\"\"\n"
" self._{0} = value\n\n").format(_camel_case(member.name), _get_type_name(member))
def _camel_case(obj):
a = re.compile('((?<=[a-z0-9])[A-Z]|(?!^)[A-Z](?=[a-z]))')
return a.sub(r'_\1', obj).lower()
def _capitalize(obj):
"""
Returns the object name with the first letter capitalized (all others untouched).
:param obj:
:return:
"""
if obj.__len__() < 2:
return obj
if obj == "string" or obj == "float" or obj == "int":
return obj
return obj[0].upper() + obj[1:]
def _get_type_name(obj):
if obj.type == ParsedObjectType.String:
return "str"
if obj.type == ParsedObjectType.Object or obj.type == ParsedObjectType.Enum:
return _capitalize(obj.type_name)
if obj.type == ParsedObjectType.Array:
return "list of [{0}]".format(_get_type_name(obj.data[0]))
return obj.type.name.lower()
|
Oh my… where to begin with this one… It's another book based on a TV episode… but this was… it was bad. I gave it 3 stars on Goodreads, mostly because it had some good points, like the crossover between the classic Star Trek and the Next Generation… Data was rather funny in this book… That's about it.
It's sad in a way, really… I'm just surprised that this book was approved for publication. I'd say, stay far away from this book!
Based on the epic two-part television episode, here now is the story STAR TREK fans have awaited for five long years, the story that brings together Spock — the enigmatic Vulcan who personified the original, classic STAR TREK — with the crew of the Next Generation. Screenwriter Jeri Taylor brings all the excitement and wonder that have captivated fans of the smash television series STAR TREK: THE NEXT GENERATION to this story of Spock's forbidden journey into the heart of the Romulan Empire — and the U.S.S. Enterprise's desperate attempts to discover the reasons for his mission there. Join now with Captain Picard, Lieutenant Commander Data, and the rest of the Next Generation crew on a voyage of unsurpassed adventure, a voyage that brings them to the edge of history — and forces them to confront a shattering betrayal!
I really wanted to stop reading this book. It was that bad… Now that I'm writing about it, I think I'm going to change my rating on Goodreads; it's not worth 3 stars… So if you want a lackluster, badly plotted, trashy harlequin Star Trek novel, this is for you!! The odd thing is, the episode was great! They rather dropped the ball on this one.
So tired! But I can't go to sleep… It's 4am… AM! I've read 6 books already and am working on a seventh. As a side note: there will be a number of book reviews coming up on here and my Goodreads profile.
I have nothing to do tomorrow aside from work. The kids (hobbits, I like to call them) are at grandma's place till Sunday. Maybe it's just too quiet; that's why I can't sleep… Oh well.
If you guys find time, you should check out Jack 1939. It's an interesting read.
|
#!/usr/bin/env python
#coding: utf8
import os, sys, json
BASE_PATH = os.path.dirname(os.path.abspath(__file__))
# Solc Compiler
SOLC = "solc"
build_dir = os.path.join(BASE_PATH, "build")
src_dir = os.path.join(BASE_PATH, "src")
dst_dir = os.path.join(BASE_PATH, "build/src")
bin_dir = os.path.join(BASE_PATH, "build/bin")
abi_dir = os.path.join(BASE_PATH, "build/abi")
ast_dir = os.path.join(BASE_PATH, "build/ast")
src_entry = os.path.join(src_dir, "main.sol")
def rmdir(path):
for root, dirs, files in os.walk(path, topdown=False):
for name in files:
os.remove(os.path.join(root, name))
for name in dirs:
os.rmdir(os.path.join(root, name))
os.rmdir(path)
def diff_path():
if not os.path.exists(build_dir):
os.mkdir(build_dir)
if not os.path.exists(dst_dir):
os.mkdir(dst_dir)
if not os.path.exists(bin_dir):
os.mkdir(bin_dir)
if not os.path.exists(abi_dir):
os.mkdir(abi_dir)
if not os.path.exists(ast_dir):
os.mkdir(ast_dir)
assert(os.path.exists(build_dir) and os.path.isdir(build_dir) )
assert(os.path.exists(src_dir) and os.path.isdir(src_dir) )
assert(os.path.exists(dst_dir) and os.path.isdir(dst_dir) )
assert(os.path.exists(bin_dir) and os.path.isdir(bin_dir) )
assert(os.path.exists(abi_dir) and os.path.isdir(abi_dir) )
assert(os.path.exists(ast_dir) and os.path.isdir(ast_dir) )
src_paths = map(lambda (root, dirs, files): root.replace(src_dir, ""), os.walk(src_dir) )
dst_paths = map(lambda (root, dirs, files): root.replace(dst_dir, ""), os.walk(dst_dir) )
_paths = filter(lambda p: p not in src_paths, dst_paths)
paths = map(lambda p: os.path.join(dst_dir, p[1:] if p.startswith("/") else p ), _paths )
map(lambda p: rmdir(p), paths )
_paths = filter(lambda p: p not in dst_paths, src_paths)
paths = map(lambda p: os.path.join(dst_dir, p[1:] if p.startswith("/") else p ), _paths )
map(lambda p: os.mkdir(p), paths)
def clean_dst_path():
rmdir(dst_dir)
os.mkdir(dst_dir)
def find_compilers():
paths = os.environ["PATH"].split(":")
solc = filter(lambda p: os.path.exists(os.path.join(p, "solc")) and os.path.isfile(os.path.join(p, "solc")), paths)
# os.path.exists(os.path.join(p, "solcjs")) and os.path.isfile(os.path.join(p, "solcjs"))
serpent = filter(lambda p: os.path.exists(os.path.join(p, "serpent")) and os.path.isfile(os.path.join(p, "serpent")), paths)
lllc = filter(lambda p: os.path.exists(os.path.join(p, "lllc")) and os.path.isfile(os.path.join(p, "lllc")), paths)
result = []
if len(solc) > 0:
result.append("Solidity")
if len(serpent) > 0:
result.append("Serpent")
if len(lllc) > 0:
result.append("LLL")
return result
def complie_soldity():
"""
solc --optimize --bin -o ./build/bin contract.sol
solc --optimize --ast -o ./build/ast contract.sol
solc --optimize --abi -o ./build contract.sol
"""
assert(os.path.exists(src_entry) and os.path.isfile(src_entry) )
commands = [
[SOLC, "--optimize", "--bin", "-o", os.path.relpath(bin_dir), os.path.relpath(src_entry) ]
, [SOLC, "--optimize", "--ast", "-o", os.path.relpath(ast_dir), os.path.relpath(src_entry) ]
, [SOLC, "--optimize", "--abi", "-o", os.path.relpath(build_dir), os.path.relpath(src_entry) ]
]
print("======================Compile Solidity Language=========================")
for cmd in commands:
command = " ".join(cmd)
print(command)
os.system(command)
# result = map(lambda cmd: os.system(" ".join(cmd)), commands )
# print(result)
def restruct():
contract = {}
bin_files = reduce(lambda a, (root, dirs, files): a + map(lambda filename: os.path.join(root, filename), files ), os.walk(bin_dir), [] )
abi_files = reduce(lambda a, (root, dirs, files): a + map(lambda filename: os.path.join(root, filename), files ), os.walk(dst_dir), [] )
def path_handle(data, filepath):
_, filename = os.path.split(filepath)
assert(filename.endswith(".bin") or filename.endswith(".abi") )
if filename.endswith(".bin"):
key = "code"
elif filename.endswith(".abi"):
key = "interface"
else:
pass
object_name = filename[:-4]
_tmp = object_name.split(":")
if len(_tmp) > 1:
object_name = _tmp[-1]
if object_name not in data or type(data[object_name]) != dict:
data[object_name] = {}
if key not in data[object_name]:
res = open(filepath, "rb").read()
if key == "interface":
open(os.path.join(abi_dir, object_name+".abi"), "wb").write(res)
data[object_name][key] = json.loads(res)
elif key == "code":
res = "0x" + res
data[object_name][key] = res
else:
data[object_name][key] = res
return data
data = reduce(path_handle, abi_files, reduce(path_handle, bin_files, {}) )
print("======================Contract=========================")
output = json.dumps(data)
open(os.path.join(build_dir, "contract.json"), "wb").write(output)
print(output)
def usage():
message = """
$ python solidity.py -src ./src -entry main.sol -out ./build -target contract.json
-src solidity source dir
-entry source entry file
-out output dir
-target solidity bytecode and interface file (JSON Format)
--help show this help text
"""
print(message)
def main():
compilers = find_compilers()
print("====================Compilers====================")
print(compilers)
assert("Solidity" in compilers)
clean_dst_path()
diff_path()
complie_soldity()
restruct()
if __name__ == '__main__':
main()
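Downstream tooling would consume the generated contract.json by name; a minimal loader sketch (Python 3 — the contract name `Token` and the sample bytecode/ABI values are invented for illustration):

```python
import json
import os
import tempfile

# Write a stand-in contract.json with the layout restruct() produces,
# then read back a single contract's bytecode and interface.
sample = {"Token": {"code": "0x6060", "interface": [{"type": "constructor"}]}}
path = os.path.join(tempfile.mkdtemp(), "contract.json")
with open(path, "w") as f:
    json.dump(sample, f)

with open(path) as f:
    contracts = json.load(f)
token = contracts["Token"]
print(token["code"])  # deployable bytecode, "0x"-prefixed
```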
|
# xorn.geda - Python library for manipulating gEDA files
#**********************************************************************
# _ _ __ _ _
# __ _ _ __ ___| |_ | |__ __ _ ___ ___ / /_ | or |
# / _` | '_ \ / _ \ __| | '_ \ / _` / __|/ _ \ '_ \| or |_
# | (_| | | | | __/ |_ | |_) | (_| \__ \ __/ (_) |__ _|
# \__, |_| |_|\___|\__| |_.__/ \__,_|___/\___|\___/ |_|
# |___/
#
# created by Alfred Reibenschuh <alfredreibenschuh@gmx.net>,
# under the "GNU Library General Public License" (see below).
#
#**********************************************************************
# Copyright (C) 2003 Free Software Foundation
# Copyright (C) 2013-2017 Roland Lutz
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
## \namespace xorn.base64
## Reading and writing base64-encoded data
from gettext import gettext as _
BASE64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
PAD64 = '='
RANK = [
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 0x00-0x0f
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 0x10-0x1f
255,255,255,255,255,255,255,255,255,255,255, 62,255,255,255, 63, # 0x20-0x2f
52, 53, 54, 55, 56, 57, 58, 59, 60, 61,255,255,255,255,255,255, # 0x30-0x3f
255, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, # 0x40-0x4f
15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,255,255,255,255,255, # 0x50-0x5f
255, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, # 0x60-0x6f
41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51,255,255,255,255,255, # 0x70-0x7f
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 0x80-0x8f
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 0x90-0x9f
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 0xa0-0xaf
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 0xb0-0xbf
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 0xc0-0xcf
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 0xd0-0xdf
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 0xe0-0xef
255,255,255,255,255,255,255,255,255,255,255,255,255,255,255,255, # 0xf0-0xff
]
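The RANK table above is simply the inverse of the BASE64 alphabet; it can be regenerated and spot-checked in a few lines (Python 3 sketch, independent of this module):

```python
# Rebuild the decode table from the alphabet: each alphabet character
# maps to its 6-bit value; every other byte maps to the sentinel 255.
BASE64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
rank = [255] * 256
for value, ch in enumerate(BASE64):
    rank[ord(ch)] = value

print(rank[ord('A')], rank[ord('a')], rank[ord('0')], rank[ord('/')])  # 0 26 52 63
```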
## Write a binary string to a file in %base64 representation.
#
# If \a columns is not \c None, insert a newline every \a columns
# characters. This is required by RFC 2045, but some applications
# don't require it.  \a columns must be positive and a multiple of \c 4.
#
# If \a delim is not \c None, it is written on a separate line after
# the data. This argument is provided for symmetry with \ref decode.
#
# \return \c None.
def encode(f, src, columns = 72, delim = None):
# bulk encoding
blen = len(src) - len(src) % 3
ocnt = 0
for pos in xrange(0, blen, 3):
# Convert 3 bytes of src to 4 bytes of output
#
# output[0] = input[0] 7:2
# output[1] = input[0] 1:0 input[1] 7:4
# output[2] = input[1] 3:0 input[2] 7:6
        # output[3] = input[2] 5:0
i0, i1, i2 = [ord(ch) for ch in src[pos:pos + 3]]
# Map output to the Base64 alphabet
f.write(BASE64[i0 >> 2] +
BASE64[((i0 & 0x03) << 4) + (i1 >> 4)] +
BASE64[((i1 & 0x0f) << 2) + (i2 >> 6)] +
BASE64[i2 & 0x3f])
if columns is not None:
ocnt += 1
if ocnt % (columns / 4) == 0 and pos != len(src) - 3:
f.write('\n')
# Now worry about padding with remaining 1 or 2 bytes
if blen != len(src):
i0 = ord(src[blen])
if blen == len(src) - 1:
i1 = 0
else:
i1 = ord(src[blen + 1])
i2 = 0
f.write(BASE64[i0 >> 2] +
BASE64[((i0 & 0x03) << 4) + (i1 >> 4)])
if blen == len(src) - 1:
f.write(PAD64)
else:
f.write(BASE64[((i1 & 0x0f) << 2) + (i2 >> 6)])
f.write(PAD64)
if src:
f.write('\n')
if delim is not None:
f.write(delim + '\n')
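The bulk loop above packs each 3-byte group into 4 alphabet characters; the packing can be checked against the standard library (Python 3 sketch of a single group — the module itself targets Python 2):

```python
import base64

# Pack one 3-byte group exactly as the encoder's inner loop does.
BASE64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
i0, i1, i2 = b'Man'  # unpacking bytes yields ints on Python 3
group = (BASE64[i0 >> 2] +
         BASE64[((i0 & 0x03) << 4) + (i1 >> 4)] +
         BASE64[((i1 & 0x0f) << 2) + (i2 >> 6)] +
         BASE64[i2 & 0x3f])
print(group)                               # TWFu
print(base64.b64encode(b'Man').decode())   # TWFu -- same result
```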
## Raised when reading invalid or unterminated base64-encoded data.
class DecodingError(Exception):
pass
## Read a string in %base64 representation from a file.
#
# This function is liberal in what it will accept. It ignores
# non-base64 symbols.
#
# If \a delim is \c None, read until the end of the file. If \a delim
# is not \c None, read until a line containing exactly \a delim is
# found.
#
# \return A string containing the decoded data.
#
# \throw DecodingError if reading something that is not valid
# base64-encoded data
# \throw DecodingError if the end of the file is hit and \a delim is
# not \c None
def decode(f, delim = None):
ch = 0
state = 0
res = 0
dst = []
pad = 0
while True:
try:
line = f.next()
except StopIteration:
if delim is not None:
raise DecodingError, _("Unexpected end-of-file")
break
if delim is not None and line == delim + '\n':
break
for ch in line:
if ch == PAD64:
pad += 1
continue
pos = RANK[ord(ch)]
if pos == 255:
# Skip any non-base64 anywhere
continue
if pad != 0:
raise DecodingError
if state == 0:
dst += [pos << 2]
state = 1
elif state == 1:
dst[-1] |= pos >> 4
res = (pos & 0x0f) << 4
state = 2
elif state == 2:
dst += [res | (pos >> 2)]
res = (pos & 0x03) << 6
state = 3
elif state == 3:
dst += [res | pos]
state = 0
# We are done decoding Base-64 chars. Let's see if we ended
# on a byte boundary, and/or with erroneous trailing characters.
if pad != 0:
# We got a pad char.
if state == 0:
# Invalid = in first position
raise DecodingError
elif state == 1:
# Invalid = in second position
raise DecodingError
elif state == 2:
# Valid, means one byte of info
# Make sure there is another trailing = sign.
if pad != 2:
raise DecodingError
elif state == 3:
# Valid, means two bytes of info
# We know this char is an =. Is there anything but
# whitespace after it?
if pad != 1:
raise DecodingError
if state == 2 or state == 3:
# Now make sure for cases 2 and 3 that the "extra"
# bits that slopped past the last full byte were
# zeros. If we don't check them, they become a
# subliminal channel.
if res != 0:
raise DecodingError
else:
# We ended by seeing the end of the string. Make sure we
# have no partial bytes lying around.
if state != 0:
raise DecodingError
return ''.join(chr(b) for b in dst)
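The decoder's four-state machine can be exercised with a round trip through the standard library (Python 3 sketch mirroring decode()'s state transitions, without the padding validation):

```python
import base64

# Minimal 4-state base64 decoder: each state consumes one 6-bit symbol
# and emits a byte whenever 8 bits have accumulated.
BASE64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
rank = {ch: i for i, ch in enumerate(BASE64)}

def decode_chars(text):
    dst, res, state = [], 0, 0
    for ch in text:
        if ch not in rank:           # skip '=', newlines, other non-alphabet
            continue
        pos = rank[ch]
        if state == 0:
            dst.append(pos << 2); state = 1
        elif state == 1:
            dst[-1] |= pos >> 4; res = (pos & 0x0f) << 4; state = 2
        elif state == 2:
            dst.append(res | (pos >> 2)); res = (pos & 0x03) << 6; state = 3
        else:
            dst.append(res | pos); state = 0
    return bytes(dst)

encoded = base64.b64encode(b'any carnal pleasure').decode()
print(decode_chars(encoded))   # b'any carnal pleasure'
```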
|
The team, co-led by Vivienne Sze, associate professor in MIT's Department of Electrical Engineering and Computer Science (EECS), and Sertac Karaman, the Class of 1948 Career Development Associate Professor of Aeronautics and Astronautics, built a fully customized chip from the ground up, with a focus on reducing power consumption and size while also increasing processing speed.
The new computer chip, named Navion, which they are presenting this week at the Symposia on VLSI Technology and Circuits, is just 20 square millimeters—about the size of a LEGO minifigure’s footprint—and consumes just 24 milliwatts of power, or about one-thousandth of the energy required to power a lightbulb.
Using this tiny amount of power, the chip is able to process camera images in real time at up to 171 frames per second, as well as inertial measurements, both of which it uses to determine where it is in space. The researchers say the chip can be integrated into nanodrones as small as a fingernail, to help the vehicles navigate, particularly in remote or inaccessible places where global positioning satellite data is unavailable.
The chip design can also be run on any small robot or device that needs to navigate over long stretches of time on a limited power supply.
Sze and Karaman’s co-authors are EECS graduate student Amr Suleiman, who is the lead author; EECS graduate student Zhengdong Zhang; and Luca Carlone, who was a research scientist during the project and is now an assistant professor in MIT’s Department of Aeronautics and Astronautics.
In the past few years, multiple research groups have engineered miniature drones small enough to fit in the palm of your hand. Scientists envision that such tiny vehicles can fly around and snap pictures of your surroundings, like mosquito-sized photographers or surveyors, before landing back in your palm, where they can then be easily stored away.
But a palm-sized drone can only carry so much battery power, most of which is used to make its motors fly, leaving very little energy for other essential operations, such as navigation, and, in particular, state estimation, or a robot’s ability to determine where it is in space.
In their previous work, Sze and Karaman began to address such issues by combining algorithms and hardware in a single chip. Their initial design was implemented on a field-programmable gate array, or FPGA, a commercial hardware platform that can be configured to a given application. The chip was able to perform state estimation using 2 watts of power, compared to larger, standard drones that typically require 10 to 30 watts to perform the same tasks. Still, the chip’s power consumption was greater than the total amount of power that miniature drones can typically carry, which researchers estimate to be about 100 milliwatts.
To shrink the chip further, in both size and power consumption, the team decided to build a chip from the ground up rather than reconfigure an existing design. “This gave us a lot more flexibility in the design of the chip,” Sze says.
To reduce the chip’s power consumption, the group came up with a design to minimize the amount of data — in the form of camera images and inertial measurements — that is stored on the chip at any given time. The design also optimizes the way this data flows across the chip.
“Any of the images we would’ve temporarily stored on the chip, we actually compressed so it required less memory,” says Sze, who is a member of the Research Laboratory of Electronics at MIT. The team also cut down on extraneous operations, such as computations involving zeros, which simply result in a zero. The researchers found a way to skip those computational steps involving any zeros in the data. “This allowed us to avoid having to process and store all those zeros, so we can cut out a lot of unnecessary storage and compute cycles, which reduces the chip size and power, and increases the processing speed of the chip,” Sze says.
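The zero-skipping idea described here — spending no cycles or storage on operands that are already zero — can be illustrated with a toy multiply-accumulate loop (illustrative only; Navion implements this in custom hardware, not software):

```python
# Toy sparse multiply-accumulate: pairs containing a zero contribute
# nothing to the sum, so they are skipped rather than multiplied.
def sparse_dot(xs, ys):
    total = 0
    ops = 0
    for x, y in zip(xs, ys):
        if x == 0 or y == 0:
            continue           # skip: the product is zero anyway
        total += x * y
        ops += 1
    return total, ops

total, ops = sparse_dot([0, 3, 0, 0, 5], [2, 4, 9, 9, 1])
print(total, ops)   # 17 2 -- same dot product, 2 multiplies instead of 5
```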
Through their design, the team was able to reduce the chip’s memory from its previous 2 megabytes, to about 0.8 megabytes. The team tested the chip on previously collected datasets generated by drones flying through multiple environments, such as office and warehouse-type spaces.
“While we customized the chip for low power and high-speed processing, we also made it sufficiently flexible so that it can adapt to these different environments for additional energy savings,” Sze says. “The key is finding the balance between flexibility and efficiency.” The chip can also be reconfigured to support different cameras and inertial measurement unit (IMU) sensors.
From these tests, the researchers found they were able to bring down the chip’s power consumption from 2 watts to 24 milliwatts, and that this was enough to power the chip to process images at 171 frames per second — a rate that was even faster than what the datasets projected.
The team plans to demonstrate its design by implementing its chip on a miniature race car. While a screen displays an onboard camera’s live video, the researchers also hope to show the chip determining where it is in space, in real-time, as well as the amount of power that it uses to perform this task. Eventually, the team plans to test the chip on an actual drone, and ultimately on a miniature drone.
This research was supported, in part, by the Air Force Office of Scientific Research, and by the National Science Foundation.
|
#!/usr/bin/python3
import gi
gi.require_version('CDesktopEnums', '3.0')
from gi.repository import Gtk, Gdk, CScreensaver, CDesktopEnums, GObject
import random
import status
import constants as c
import singletons
from monitorView import MonitorView
from unlock import UnlockDialog
from clock import ClockWidget
from albumArt import AlbumArt
from audioPanel import AudioPanel
from infoPanel import InfoPanel
from osk import OnScreenKeyboard
from floating import ALIGNMENTS
from util import utils, trackers, settings
from util.eventHandler import EventHandler
class Stage(Gtk.Window):
"""
The Stage is the toplevel window of the entire screensaver while
in Active mode.
It's the first thing made, the last thing destroyed, and all other
widgets live inside of it (or rather, inside the GtkOverlay below)
It is Gtk.WindowType.POPUP to avoid being managed/composited by muffin,
and to prevent animation during its creation and destruction.
    The Stage responds pretty much only to the instructions of the
ScreensaverManager.
"""
def __init__(self, manager, away_message):
if status.InteractiveDebug:
Gtk.Window.__init__(self,
type=Gtk.WindowType.TOPLEVEL,
decorated=True,
skip_taskbar_hint=False)
else:
Gtk.Window.__init__(self,
type=Gtk.WindowType.POPUP,
decorated=False,
skip_taskbar_hint=True)
self.get_style_context().add_class("csstage")
trackers.con_tracker_get().connect(singletons.Backgrounds,
"changed",
self.on_bg_changed)
self.destroying = False
self.manager = manager
status.screen = CScreensaver.Screen.new(status.Debug)
self.away_message = away_message
self.monitors = []
self.last_focus_monitor = -1
self.overlay = None
self.clock_widget = None
self.albumart_widget = None
self.unlock_dialog = None
self.audio_panel = None
self.info_panel = None
self.stage_refresh_id = 0
self.floaters = []
self.event_handler = EventHandler(manager)
self.get_style_context().remove_class("background")
self.set_events(self.get_events() |
Gdk.EventMask.POINTER_MOTION_MASK |
Gdk.EventMask.BUTTON_PRESS_MASK |
Gdk.EventMask.BUTTON_RELEASE_MASK |
Gdk.EventMask.KEY_PRESS_MASK |
Gdk.EventMask.KEY_RELEASE_MASK |
Gdk.EventMask.EXPOSURE_MASK |
Gdk.EventMask.VISIBILITY_NOTIFY_MASK |
Gdk.EventMask.ENTER_NOTIFY_MASK |
Gdk.EventMask.LEAVE_NOTIFY_MASK |
Gdk.EventMask.FOCUS_CHANGE_MASK)
        rgba = Gdk.RGBA(0, 0, 0, 0)
        self.override_background_color(Gtk.StateFlags.NORMAL, rgba)
self.update_geometry()
self.overlay = Gtk.Overlay()
trackers.con_tracker_get().connect(self.overlay,
"realize",
self.on_realized)
trackers.con_tracker_get().connect(self.overlay,
"get-child-position",
self.position_overlay_child)
self.overlay.show_all()
self.add(self.overlay)
# We hang onto the UPowerClient here so power events can
# trigger changes to the info panel.
self.power_client = singletons.UPowerClient
trackers.con_tracker_get().connect(self.power_client,
"power-state-changed",
self.on_power_state_changed)
# This filter suppresses any other windows that might share
# our window group in muffin, from showing up over the Stage.
# For instance: Chrome and Firefox native notifications.
self.gdk_filter = CScreensaver.GdkEventFilter()
trackers.con_tracker_get().connect(status.screen,
"size-changed",
self.on_screen_size_changed)
trackers.con_tracker_get().connect(status.screen,
"monitors-changed",
self.on_monitors_changed)
trackers.con_tracker_get().connect(status.screen,
"composited-changed",
self.on_composited_changed)
trackers.con_tracker_get().connect(self,
"grab-broken-event",
self.on_grab_broken_event)
if status.InteractiveDebug:
self.set_interactive_debugging(True)
def update_monitors(self):
self.destroy_monitor_views()
try:
self.setup_monitors()
for monitor in self.monitors:
self.sink_child_widget(monitor)
except Exception as e:
            print("Problem updating monitor views: %s" % str(e))
def on_screen_size_changed(self, screen, data=None):
"""
The screen changing size should be acted upon immediately, to ensure coverage.
Wallpapers are secondary.
"""
if status.Debug:
print("Stage: Received screen size-changed signal, refreshing stage")
self.update_geometry()
self.move_onscreen()
self.overlay.queue_resize()
def on_monitors_changed(self, screen, data=None):
"""
Updating monitors also will trigger an immediate stage coverage update (same
as on_screen_size_changed), and follow up at idle with actual monitor view
refreshes (wallpapers.)
"""
if status.Debug:
print("Stage: Received screen monitors-changed signal, refreshing stage")
self.update_geometry()
self.move_onscreen()
self.overlay.queue_resize()
Gdk.flush()
self.queue_refresh_stage()
def on_composited_changed(self, screen, data=None):
if self.get_realized():
user_time = self.get_display().get_user_time()
self.hide()
self.unrealize()
self.realize()
self.get_window().set_user_time(user_time)
self.show()
GObject.idle_add(self.manager.grab_stage)
def on_grab_broken_event(self, widget, event, data=None):
GObject.idle_add(self.manager.grab_stage)
return False
def queue_refresh_stage(self):
"""
Queues a complete refresh of the stage, resizing the screen if necessary,
reconstructing the individual monitor objects, etc...
"""
if self.stage_refresh_id > 0:
GObject.source_remove(self.stage_refresh_id)
self.stage_refresh_id = 0
self.stage_refresh_id = GObject.idle_add(self._update_full_stage_on_idle)
def _update_full_stage_on_idle(self, data=None):
self.stage_refresh_id = 0
self._refresh()
return False
def _refresh(self):
Gdk.flush()
if status.Debug:
print("Stage: refresh callback")
self.update_geometry()
self.move_onscreen()
self.update_monitors()
self.overlay.queue_resize()
def activate(self, callback):
"""
This is the primary way of making the Stage visible.
"""
self.set_opacity(1.0)
self.move_onscreen()
self.show()
callback()
def deactivate(self, callback):
"""
This is the primary way of destroying the stage.
"""
self.hide()
callback()
def on_realized(self, widget):
"""
Repositions the window when it is realized, to cover the entire
GdkScreen (a rectangle exactly encompassing all monitors.)
From here we also proceed to construct all overlay children and
activate our window suppressor.
"""
window = self.get_window()
utils.override_user_time(window)
self.setup_children()
self.gdk_filter.start(self)
trackers.con_tracker_get().disconnect(self.overlay,
"realize",
self.on_realized)
def move_onscreen(self):
w = self.get_window()
if w:
w.move_resize(self.rect.x,
self.rect.y,
self.rect.width,
self.rect.height)
self.move(self.rect.x, self.rect.y)
self.resize(self.rect.width, self.rect.height)
def deactivate_after_timeout(self):
self.manager.set_active(False)
def setup_children(self):
"""
Creates all of our overlay children. If a new 'widget' gets added,
this should be the setup point for it.
We bail if something goes wrong on a critical widget - a monitor view or
unlock widget.
"""
total_failure = False
try:
self.setup_monitors()
except Exception as e:
print("Problem setting up monitor views: %s" % str(e))
total_failure = True
try:
self.setup_unlock()
except Exception as e:
print("Problem setting up unlock dialog: %s" % str(e))
total_failure = True
if not total_failure:
try:
self.setup_clock()
except Exception as e:
print("Problem setting up clock widget: %s" % str(e))
self.clock_widget = None
try:
self.setup_albumart()
except Exception as e:
print("Problem setting up albumart widget: %s" % str(e))
self.albumart_widget = None
try:
self.setup_status_bars()
except Exception as e:
print("Problem setting up status bars: %s" % str(e))
self.audio_panel = None
self.info_panel = None
try:
self.setup_osk()
except Exception as e:
print("Problem setting up on-screen keyboard: %s" % str(e))
self.osk = None
if total_failure:
print("Total failure somewhere, deactivating screensaver.")
GObject.idle_add(self.deactivate_after_timeout)
def destroy_children(self):
try:
self.destroy_monitor_views()
except Exception as e:
print(e)
try:
if self.unlock_dialog != None:
self.unlock_dialog.destroy()
except Exception as e:
print(e)
try:
if self.clock_widget != None:
self.clock_widget.stop_positioning()
self.clock_widget.destroy()
except Exception as e:
print(e)
try:
if self.albumart_widget != None:
self.albumart_widget.stop_positioning()
self.albumart_widget.destroy()
except Exception as e:
print(e)
try:
if self.info_panel != None:
self.info_panel.destroy()
except Exception as e:
print(e)
try:
            if self.audio_panel != None:
self.audio_panel.destroy()
except Exception as e:
print(e)
try:
if self.osk != None:
self.osk.destroy()
except Exception as e:
print(e)
self.unlock_dialog = None
self.clock_widget = None
self.albumart_widget = None
self.info_panel = None
self.audio_panel = None
self.osk = None
self.away_message = None
self.monitors = []
self.floaters = []
def destroy_stage(self):
"""
Performs all tear-down necessary to destroy the Stage, destroying
all children in the process, and finally destroying itself.
"""
trackers.con_tracker_get().disconnect(singletons.Backgrounds,
"changed",
self.on_bg_changed)
trackers.con_tracker_get().disconnect(self.power_client,
"power-state-changed",
self.on_power_state_changed)
trackers.con_tracker_get().disconnect(self,
"grab-broken-event",
self.on_grab_broken_event)
self.set_timeout_active(None, False)
self.destroy_children()
self.gdk_filter.stop()
self.gdk_filter = None
trackers.con_tracker_get().disconnect(status.screen,
"size-changed",
self.on_screen_size_changed)
trackers.con_tracker_get().disconnect(status.screen,
"monitors-changed",
self.on_monitors_changed)
trackers.con_tracker_get().disconnect(self.overlay,
"get-child-position",
self.position_overlay_child)
self.destroy()
status.screen = None
def setup_monitors(self):
"""
Iterate through the monitors, and create MonitorViews for each one
to cover them.
"""
self.monitors = []
status.Spanned = settings.bg_settings.get_enum("picture-options") == CDesktopEnums.BackgroundStyle.SPANNED
if status.InteractiveDebug or status.Spanned:
monitors = (status.screen.get_primary_monitor(),)
else:
n = status.screen.get_n_monitors()
monitors = ()
for i in range(n):
monitors += (i,)
for index in monitors:
monitor = MonitorView(index)
image = Gtk.Image()
singletons.Backgrounds.create_and_set_gtk_image (image,
monitor.rect.width,
monitor.rect.height)
monitor.set_next_wallpaper_image(image)
self.monitors.append(monitor)
self.add_child_widget(monitor)
self.update_monitor_views()
def on_bg_changed(self, bg):
"""
Callback for our GnomeBackground instance, this tells us when
the background settings have changed, so we can update our wallpaper.
"""
for monitor in self.monitors:
image = Gtk.Image()
singletons.Backgrounds.create_and_set_gtk_image (image,
monitor.rect.width,
monitor.rect.height)
monitor.set_next_wallpaper_image(image)
def on_power_state_changed(self, client, data=None):
"""
Callback for UPower changes, this will make our MonitorViews update
themselves according to user setting and power state.
"""
if status.Debug:
print("stage: Power state changed, updating info panel")
self.info_panel.update_visibility()
def setup_clock(self):
"""
Construct the clock widget and add it to the overlay, but only actually
show it if we're a) Not running a plug-in, and b) The user wants it via
preferences.
Initially invisible, regardless - its visibility is controlled via its
own positioning timer.
"""
self.clock_widget = ClockWidget(self.away_message, status.screen.get_mouse_monitor(), status.screen.get_low_res_mode())
self.add_child_widget(self.clock_widget)
self.floaters.append(self.clock_widget)
if settings.get_show_clock():
self.clock_widget.start_positioning()
def setup_albumart(self):
"""
Construct the AlbumArt widget and add it to the overlay, but only actually
show it if we're a) Not running a plug-in, and b) The user wants it via
preferences.
Initially invisible, regardless - its visibility is controlled via its
own positioning timer.
"""
self.albumart_widget = AlbumArt(None, status.screen.get_mouse_monitor())
self.add_child_widget(self.albumart_widget)
        self.floaters.append(self.albumart_widget)
if settings.get_show_albumart():
self.albumart_widget.start_positioning()
def setup_osk(self):
self.osk = OnScreenKeyboard()
self.add_child_widget(self.osk)
def setup_unlock(self):
"""
Construct the unlock dialog widget and add it to the overlay. It will always
initially be invisible.
Any time the screensaver is awake, and the unlock dialog is raised, a timer runs.
After a certain elapsed time, the state will be reset, and the dialog will be hidden
once more. Mouse and key events reset this timer, and the act of authentication
temporarily suspends it - the unlock widget accomplishes this via its inhibit- and
uninhibit-timeout signals
We also listen to actual authentication events, to destroy the stage if there is success,
and to do something cute if we fail (for now, this consists of 'blinking' the unlock
dialog.)
"""
self.unlock_dialog = UnlockDialog()
self.set_default(self.unlock_dialog.auth_unlock_button)
self.add_child_widget(self.unlock_dialog)
# Prevent a dialog timeout during authentication
trackers.con_tracker_get().connect(self.unlock_dialog,
"inhibit-timeout",
self.set_timeout_active, False)
trackers.con_tracker_get().connect(self.unlock_dialog,
"uninhibit-timeout",
self.set_timeout_active, True)
# Respond to authentication success/failure
trackers.con_tracker_get().connect(self.unlock_dialog,
"authenticate-success",
self.authentication_result_callback, True)
trackers.con_tracker_get().connect(self.unlock_dialog,
"authenticate-failure",
self.authentication_result_callback, False)
trackers.con_tracker_get().connect(self.unlock_dialog,
"authenticate-cancel",
self.authentication_cancel_callback)
def setup_status_bars(self):
"""
Constructs the AudioPanel and InfoPanel and adds them to the overlay.
"""
self.audio_panel = AudioPanel()
self.add_child_widget(self.audio_panel)
self.info_panel = InfoPanel()
self.add_child_widget(self.info_panel)
self.info_panel.update_visibility()
def queue_dialog_key_event(self, event):
"""
Sent from our EventHandler via the ScreensaverManager, this catches
initial key events before the unlock dialog is made visible, so that
the user doesn't have to first jiggle the mouse to wake things up before
beginning to type their password. They can just start typing, and no
keystrokes will be lost.
"""
self.unlock_dialog.queue_key_event(event)
# Timer stuff - after a certain time, the unlock dialog will cancel itself.
# This timer is suspended during authentication, and any time a new user event is received
def reset_timeout(self):
"""
This is called when any user event is received in our EventHandler.
This restarts our dialog timeout.
"""
self.set_timeout_active(None, True)
def set_timeout_active(self, dialog, active):
"""
Start or stop the dialog timer
"""
if active and not status.InteractiveDebug:
trackers.timer_tracker_get().start("wake-timeout",
c.UNLOCK_TIMEOUT * 1000,
self.on_wake_timeout)
else:
trackers.timer_tracker_get().cancel("wake-timeout")
def on_wake_timeout(self):
"""
Go back to Sleep if we hit our timer limit
"""
self.set_timeout_active(None, False)
self.manager.cancel_unlock_widget()
return False
def authentication_result_callback(self, dialog, success):
"""
Called by authentication success or failure. Either starts
the stage despawning process or simply 'blinks' the unlock
widget, depending on the outcome.
"""
if success:
if self.clock_widget != None:
self.clock_widget.hide()
if self.albumart_widget != None:
self.albumart_widget.hide()
self.unlock_dialog.hide()
self.manager.unlock()
else:
self.unlock_dialog.blink()
def authentication_cancel_callback(self, dialog):
self.cancel_unlock_widget()
def set_message(self, msg):
"""
Passes along an away-message to the clock.
"""
if self.clock_widget != None:
self.clock_widget.set_message(msg)
def initialize_pam(self):
return self.unlock_dialog.initialize_auth_client()
def raise_unlock_widget(self):
"""
Bring the unlock widget to the front and make sure it's visible.
"""
self.reset_timeout()
if status.Awake:
return
status.screen.place_pointer_in_primary_monitor ()
utils.clear_clipboards(self.unlock_dialog)
if self.clock_widget != None:
self.clock_widget.stop_positioning()
if self.albumart_widget != None:
self.albumart_widget.stop_positioning()
status.Awake = True
if self.info_panel:
self.info_panel.refresh_power_state()
if self.clock_widget != None:
self.clock_widget.show()
if self.albumart_widget != None:
self.albumart_widget.show()
self.unlock_dialog.show()
if self.audio_panel != None:
self.audio_panel.show_panel()
if self.info_panel != None:
self.info_panel.update_visibility()
if self.osk != None:
self.osk.show()
def cancel_unlocking(self):
if self.unlock_dialog:
self.unlock_dialog.cancel_auth_client()
def cancel_unlock_widget(self):
"""
Hide the unlock widget (and others) if the unlock has been canceled
"""
if not status.Awake:
return
self.set_timeout_active(None, False)
utils.clear_clipboards(self.unlock_dialog)
self.unlock_dialog.hide()
if self.clock_widget != None:
self.clock_widget.hide()
if self.albumart_widget != None:
self.albumart_widget.hide()
if self.audio_panel != None:
self.audio_panel.hide()
if self.info_panel != None:
self.info_panel.hide()
if self.osk != None:
self.osk.hide()
self.unlock_dialog.cancel()
status.Awake = False
self.update_monitor_views()
self.info_panel.update_visibility()
def update_monitor_views(self):
"""
Updates all of our MonitorViews based on the power
or Awake states.
"""
if not status.Awake:
if self.clock_widget != None and settings.get_show_clock():
self.clock_widget.start_positioning()
if self.albumart_widget != None and settings.get_show_albumart():
self.albumart_widget.start_positioning()
for monitor in self.monitors:
monitor.show()
def destroy_monitor_views(self):
"""
Destroy all MonitorViews
"""
for monitor in self.monitors:
monitor.destroy()
del monitor
def do_motion_notify_event(self, event):
"""
GtkWidget class motion-event handler. Delegate to EventHandler
"""
return self.event_handler.on_motion_event(event)
def do_key_press_event(self, event):
"""
GtkWidget class key-press-event handler. Delegate to EventHandler
"""
return self.event_handler.on_key_press_event(event)
def do_button_press_event(self, event):
"""
GtkWidget class button-press-event handler. Delegate to EventHandler
"""
return self.event_handler.on_button_press_event(event)
def update_geometry(self):
"""
Override BaseWindow.update_geometry() - the Stage should always be the
GdkScreen size, unless status.InteractiveDebug is True
"""
if status.InteractiveDebug:
monitor_n = status.screen.get_primary_monitor()
self.rect = status.screen.get_monitor_geometry(monitor_n)
else:
self.rect = status.screen.get_screen_geometry()
if status.Debug:
print("Stage.update_geometry - new backdrop position: %d, %d new size: %d x %d" % (self.rect.x, self.rect.y, self.rect.width, self.rect.height))
hints = Gdk.Geometry()
hints.min_width = self.rect.width
hints.min_height = self.rect.height
hints.max_width = self.rect.width
hints.max_height = self.rect.height
hints.base_width = self.rect.width
hints.base_height = self.rect.height
self.set_geometry_hints(self, hints, Gdk.WindowHints.MIN_SIZE | Gdk.WindowHints.MAX_SIZE | Gdk.WindowHints.BASE_SIZE)
# Overlay window management
def get_mouse_monitor(self):
if status.InteractiveDebug:
return status.screen.get_primary_monitor()
else:
return status.screen.get_mouse_monitor()
def maybe_update_layout(self):
"""
Called on all user events, moves widgets to the currently
focused monitor if it changes (whichever monitor the mouse is in)
"""
current_focus_monitor = status.screen.get_mouse_monitor()
if self.last_focus_monitor == -1:
self.last_focus_monitor = current_focus_monitor
return
if self.unlock_dialog and current_focus_monitor != self.last_focus_monitor:
self.last_focus_monitor = current_focus_monitor
self.overlay.queue_resize()
def add_child_widget(self, widget):
"""
Add a new child to the overlay
"""
self.overlay.add_overlay(widget)
def sink_child_widget(self, widget):
"""
Move a child to the bottom of the overlay
"""
self.overlay.reorder_overlay(widget, 0)
def position_overlay_child(self, overlay, child, allocation):
"""
Callback for our GtkOverlay, think of this as a mini-
window manager for our Stage.
Depending on what type child is, we position it differently.
We always call child.get_preferred_size() whether we plan to use
it or not - this prevents allocation warning spew, particularly in
Gtk >= 3.20.
Returning True says, yes draw it. Returning False tells it to skip
drawing.
If a new widget type is introduced that spawns directly on the stage,
it must have its own handling code here.
"""
if isinstance(child, MonitorView):
"""
MonitorView is always the size and position of its assigned monitor.
            This is calculated and stored by the child in child.rect.
"""
w, h = child.get_preferred_size()
allocation.x = child.rect.x
allocation.y = child.rect.y
allocation.width = child.rect.width
allocation.height = child.rect.height
return True
if isinstance(child, UnlockDialog):
"""
UnlockDialog always shows on the currently focused monitor (the one the
mouse is currently in), and is kept centered.
"""
monitor = status.screen.get_mouse_monitor()
monitor_rect = status.screen.get_monitor_geometry(monitor)
min_rect, nat_rect = child.get_preferred_size()
allocation.width = nat_rect.width
allocation.height = nat_rect.height
allocation.x = monitor_rect.x + (monitor_rect.width / 2) - (allocation.width / 2)
allocation.y = monitor_rect.y + (monitor_rect.height / 2) - (allocation.height / 2)
return True
if isinstance(child, ClockWidget) or isinstance(child, AlbumArt):
"""
ClockWidget and AlbumArt behave differently depending on if status.Awake is True or not.
The widgets' halign and valign properties are used to store their gross position on the
monitor. This limits the number of possible positions to (3 * 3 * n_monitors) when our
screensaver is not Awake, and the widgets have an internal timer that randomizes halign,
valign, and current monitor every so many seconds, calling a queue_resize on itself after
each timer tick (which forces this function to run).
"""
min_rect, nat_rect = child.get_preferred_size()
if status.Awake:
current_monitor = status.screen.get_mouse_monitor()
else:
current_monitor = child.current_monitor
monitor_rect = status.screen.get_monitor_geometry(current_monitor)
region_w = monitor_rect.width / 3
region_h = monitor_rect.height
if status.Awake:
"""
If we're Awake, force the clock to track to the active monitor, and be aligned to
the left-center. The albumart widget aligns right-center.
"""
unlock_mw, unlock_nw = self.unlock_dialog.get_preferred_width()
"""
If, for whatever reason, we need more than 1/3 of the screen to fully display
                the unlock dialog, reduce our available region width to accommodate it, reducing
the allocation for the floating widgets as required.
"""
if (unlock_nw > region_w):
region_w = (monitor_rect.width - unlock_nw) / 2
region_h = monitor_rect.height
if isinstance(child, ClockWidget):
child.set_halign(Gtk.Align.START)
else:
child.set_halign(Gtk.Align.END)
child.set_valign(Gtk.Align.CENTER)
else:
if settings.get_allow_floating():
for floater in self.floaters:
"""
Don't let our floating widgets end up in the same spot.
"""
if floater is child:
continue
if floater.get_halign() != child.get_halign() and floater.get_valign() != child.get_valign():
continue
region_h = monitor_rect.height / 3
fa = floater.get_halign()
ca = child.get_halign()
while fa == ca:
ca = ALIGNMENTS[random.randint(0, 2)]
child.set_halign(ca)
fa = floater.get_valign()
ca = child.get_valign()
while fa == ca:
ca = ALIGNMENTS[random.randint(0, 2)]
child.set_valign(ca)
# Restrict the widget size to the allowable region sizes if necessary.
allocation.width = min(nat_rect.width, region_w)
allocation.height = min(nat_rect.height, region_h)
# Calculate padding required to center widgets within their particular 1/9th of the monitor
padding_left = padding_right = (region_w - allocation.width) / 2
padding_top = padding_bottom = (region_h - allocation.height) / 2
halign = child.get_halign()
valign = child.get_valign()
if halign == Gtk.Align.START:
allocation.x = monitor_rect.x + padding_left
elif halign == Gtk.Align.CENTER:
allocation.x = monitor_rect.x + (monitor_rect.width / 2) - (allocation.width / 2)
elif halign == Gtk.Align.END:
allocation.x = monitor_rect.x + monitor_rect.width - allocation.width - padding_right
if valign == Gtk.Align.START:
allocation.y = monitor_rect.y + padding_top
elif valign == Gtk.Align.CENTER:
allocation.y = monitor_rect.y + (monitor_rect.height / 2) - (allocation.height / 2)
elif valign == Gtk.Align.END:
allocation.y = monitor_rect.y + monitor_rect.height - allocation.height - padding_bottom
return True
if isinstance(child, AudioPanel):
"""
The AudioPanel is only shown when Awake, and attaches
itself to the upper-left corner of the active monitor.
"""
min_rect, nat_rect = child.get_preferred_size()
if status.Awake:
current_monitor = status.screen.get_mouse_monitor()
monitor_rect = status.screen.get_monitor_geometry(current_monitor)
allocation.x = monitor_rect.x
allocation.y = monitor_rect.y
allocation.width = nat_rect.width
allocation.height = nat_rect.height
else:
allocation.x = child.rect.x
allocation.y = child.rect.y
allocation.width = nat_rect.width
allocation.height = nat_rect.height
return True
if isinstance(child, InfoPanel):
"""
The InfoPanel can be shown while not Awake, but will only appear if a) We have received
notifications while the screensaver is running, or b) we're either on battery
or plugged in but with a non-full battery. It attaches itself to the upper-right
corner of the monitor.
"""
min_rect, nat_rect = child.get_preferred_size()
if status.Awake:
current_monitor = status.screen.get_mouse_monitor()
monitor_rect = status.screen.get_monitor_geometry(current_monitor)
allocation.x = monitor_rect.x + monitor_rect.width - nat_rect.width
allocation.y = monitor_rect.y
allocation.width = nat_rect.width
allocation.height = nat_rect.height
else:
allocation.x = child.rect.x + child.rect.width - nat_rect.width
allocation.y = child.rect.y
allocation.width = nat_rect.width
allocation.height = nat_rect.height
return True
if isinstance(child, OnScreenKeyboard):
"""
The InfoPanel can be shown while not Awake, but will only appear if a) We have received
notifications while the screensaver is running, or b) we're either on battery
or plugged in but with a non-full battery. It attaches itself to the upper-right
corner of the monitor.
"""
min_rect, nat_rect = child.get_preferred_size()
current_monitor = status.screen.get_mouse_monitor()
monitor_rect = status.screen.get_monitor_geometry(current_monitor)
allocation.x = monitor_rect.x
allocation.y = monitor_rect.y + monitor_rect.height - (monitor_rect.height / 3)
allocation.width = monitor_rect.width
allocation.height = monitor_rect.height / 3
return True
return False
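The halign/valign placement arithmetic in position_overlay_child can be isolated as a pure function, which makes the padding behavior easy to check. A minimal sketch — the function name and the string-valued alignments are hypothetical stand-ins for Gtk.Align:

```python
def region_x(monitor_x, monitor_w, region_w, widget_w, halign):
    # Center the widget within its region of the monitor, mirroring the
    # halign branch of position_overlay_child.
    padding = (region_w - widget_w) // 2
    if halign == "start":
        return monitor_x + padding
    if halign == "center":
        return monitor_x + monitor_w // 2 - widget_w // 2
    # "end": flush against the right edge, minus its centering padding
    return monitor_x + monitor_w - widget_w - padding
```

With a 900px-wide monitor split into 300px regions, a 100px widget lands at x=100, 400, or 700 for start/center/end respectively.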
|
import time
import datetime
from app import db, utils
from app.models import Post, Message
from flask import redirect, request
from flask_login import current_user
from flask_wtf import Form
from wtforms import validators, StringField, TextAreaField, HiddenField
class NewPostForm(Form):
title = StringField('Title:', validators=[validators.DataRequired(), validators.Length(min=0, max=1000)])
body = TextAreaField('Body:', validators=[validators.Length(min=0, max=30000)], widget=utils.TinyMCE)
bodyhtml = HiddenField()
def validate(self):
is_valid = Form.validate(self)
self.body.data = self.bodyhtml.data # preserve what has already been entered
return is_valid
def new_post():
form = NewPostForm()
if form.validate_on_submit():
data = {"title": form.title.data,
"body": form.bodyhtml.data,
"author": current_user.id_,
"timestamp": datetime.datetime.now()}
newpost = Post(**data)
db.session.add(newpost)
db.session.commit()
time.sleep(0.5)
return redirect("/news")
return utils.render_with_navbar("post/form.html", form=form, heading="News Item")
def new_message():
form = NewPostForm()
if form.validate_on_submit():
data = {"title": form.title.data,
"body": form.bodyhtml.data,
"author": current_user.id_,
"timestamp": datetime.datetime.now()}
newpost = Message(**data)
db.session.add(newpost)
db.session.commit()
time.sleep(0.5)
return redirect("/message")
return utils.render_with_navbar("post/form.html", form=form, heading="Principal's Message")
def edit_post():
postid = request.args.get("postid")
if not postid:
return redirect("/newpost")
current_post = Post.query.filter_by(id_=postid).first()
if not current_post:
return redirect("/newpost")
data = {"title": current_post.title,
"body": current_post.body}
form = NewPostForm(**data)
if form.validate_on_submit():
new_data = {"title": form.title.data,
"body": form.body.data}
for key, value in new_data.items():
setattr(current_post, key, value)
db.session.commit()
time.sleep(0.5)
return redirect("/news?postid="+postid)
return utils.render_with_navbar("post/form.html", form=form, heading="News Item")
def edit_message():
postid = request.args.get("postid")
if not postid:
return redirect("/messages")
current_post = Message.query.filter_by(id_=postid).first()
if not current_post:
return redirect("/messages")
data = {"title": current_post.title,
"body": current_post.body}
form = NewPostForm(**data)
if form.validate_on_submit():
new_data = {"title": form.title.data,
"body": form.body.data}
for key, value in new_data.items():
setattr(current_post, key, value)
db.session.commit()
time.sleep(0.5)
return redirect("/messages?postid="+postid)
return utils.render_with_navbar("post/form.html", form=form, heading="Principal's Message")
def delete_post():
postid = request.args.get("postid")
if not postid:
return redirect("/news")
post = Post.query.filter_by(id_=postid)
post.delete()
db.session.commit()
time.sleep(0.5)
return redirect("/news")
def delete_message():
postid = request.args.get("postid")
if not postid:
return redirect("/messages")
post = Message.query.filter_by(id_=postid)
post.delete()
db.session.commit()
time.sleep(0.5)
return redirect("/messages")
|
1. From I-25 exit 221, west on 104th Ave (.5 miles) to Huron St.
2. North (right) on Huron St (1 mile) to 112th Ave/Community Center Dr.
3. West (left) on 112th Ave/Community Center Dr (.6 miles) to Northwest Open Space.
4. Northwest Open Space is on the south side.
1. From I-25 exit 221, west on 104th Ave (1.1 miles) to Quivas St.
2. North (right) on Quivas St (.2 miles) at which point it becomes Pecos St.
3. Continue on Pecos St (.2 miles) and then take a left and go to the end of the parking lot.
1. From Boulder, southeast on US-36/Den-Bldr Tpk (8.5 miles), to the US-287/CO-121/Broomfield exit (Broomfield streets not shown on map).
2. North (left) on CO-121 (.4 miles), be in the left lane and merge onto 120th Ave/US-287 South.
3. East on 120th Ave/US-287 South (approx 3.2 miles) to Federal Blvd.
4. South (right) on Federal Blvd (.6 miles) to 112th Ave/Community Center Dr.
5. East (left) on 112th Ave/Community Center Dr (.7 miles) to Northwest Open Space.
6. Northwest Open Space is on the south side.
1. From I-76 exit 269B, north on Federal Blvd (6.8 miles) to 112th Ave/Community Center Dr.
2. East (right) on 112th Ave/Community Center Dr (.7 miles) to Northwest Open Space.
|
from recoDataStructure import *
class DataReceiver:
"""This class helps us to read data into the program.
During the training stage, it can read data from file
and during recognition stage, it can get real time tracking data and
pass it to the Feature Extraction module."""
def __init__(self, l_or_r):
# 0 or 1, whether it's for the left or right hand
self._l_or_r = l_or_r
# data structure for real time training or recognition
self._gloveData = None
# data structure for training from file
self._gloveDataList = list()
def readDataFromFile(self, filePath):
"""Read a sample file and create a list of ARTGlove data samples"""
# read the file into a list
f = open(filePath, 'r')
lines = f.readlines()
f.close()
        print(len(lines), "lines are read")
# create glove data and add it into the glove data list
indice = 0
limit = len(lines)
print(limit)
n = 0
while indice + 53 <= limit:
glove = self.createGloveFromFile(lines[indice:indice+53])
n += 1
self._gloveDataList.append(glove)
indice += 53
print(n,"samples are created.")
def createFingerFromFile(self, n, lines):
"""Function called by the createGloveFromFile function"""
pos_str = lines[0][0:-1].split(' ')
pos = list()
for p in pos_str:
pos.append(float(p))
ori_str = lines[1][0:-1] + ' ' + lines[2][0:-1] + ' ' + lines[3][0:-1]
ori_str = ori_str.split(' ')
ori = list()
for o in ori_str:
ori.append(float(o))
phalen_str = lines[5][0:-1].split(' ')
phalen = list()
for p in phalen_str:
phalen.append(float(p))
#print("lines[6]:",lines[6])
phaang_str = lines[6][0:-1].split(' ')
phaang = list()
for p in phaang_str:
phaang.append(float(p))
f = Finger(n, pos, ori, float(lines[4][0:-1]), phalen, phaang)
return f
def createGloveFromFile(self, lines):
"""Function called by the readDataFromFile function"""
pos_str = lines[5][0:-1].split(' ')
pos = list()
for p in pos_str:
pos.append(float(p))
ori_str = lines[6][0:-1] + ' ' + lines[7][0:-1] + ' ' + lines[8][0:-1]
ori_str = ori_str.split(' ')
ori = list()
for o in ori_str:
ori.append(float(o))
finger_name_list = ['pouce','index','majeur','annulaire','auriculaire']
i = 11
n = 0
fingers = list()
while n < 5:
fingers.append(self.createFingerFromFile(finger_name_list[n],lines[i+n*8:i+7+n*8]))
n += 1
lr = -1
if lines[3][0:-1] == 'left':
lr = 0
else:
lr = 1
g = Glove(lines[1][0:-1], 0, lines[2][0:-1], lr, int(lines[4][0:-1]), fingers, pos, ori)
return g
def readRealTimeData(self, g_frame):
""" Add a glove frame to pass later to the feature extractor """
for glove in g_frame._glove_list:
if glove._l_or_r == 1:
# use only right hand for now
self._gloveData = glove
def getOneSampleFrameFile(self):
"""Data from file, return the first data frame in the list"""
if len(self._gloveDataList) != 0:
return self._gloveDataList.pop(0)
else:
return None
def getOneSampleFrameRT(self):
return self._gloveData
def showGlovesFromFile(self):
for g in self._gloveDataList:
print(g._timestamp)
def getGloveNumberFromFile(self):
"""Return the number of samples that we create from file"""
return len(self._gloveDataList)
if __name__ == "__main__":
dr_left = DataReceiver(0)
dr_right = DataReceiver(1)
dr_left.readDataFromFile("data/final_dataset2.txt")
dr_right.readDataFromFile("data/final_dataset2.txt")
print("finish for left hand", dr_left.getGloveNumberFromFile())
print("finish for right hand", dr_right.getGloveNumberFromFile())
|
Adds a category series item, specified by its from and to values.
The from value of the item.
The to value of the item.
|
#!/usr/bin/python
CIPHERNAMES = set(('aes-128-ctr',))
import os
import sys
if sys.platform not in ('darwin',):
import pyelliptic
else:
# FIX PATH ON OS X ()
# https://github.com/yann2192/pyelliptic/issues/11
_openssl_lib_paths = ['/usr/local/Cellar/openssl/']
for p in _openssl_lib_paths:
if os.path.exists(p):
p = os.path.join(p, os.listdir(p)[-1], 'lib')
os.environ['DYLD_LIBRARY_PATH'] = p
import pyelliptic
if CIPHERNAMES.issubset(set(pyelliptic.Cipher.get_all_cipher())):
break
if 'pyelliptic' not in dir() or not CIPHERNAMES.issubset(set(pyelliptic.Cipher.get_all_cipher())):
print 'required ciphers %r not available in openssl library' % CIPHERNAMES
if sys.platform == 'darwin':
print 'use homebrew or macports to install newer openssl'
print '> brew install openssl / > sudo port install openssl'
sys.exit(1)
import bitcoin
from sha3 import sha3_256
from hashlib import sha256
import struct
import random
import devp2p.utils as utils
try:
from ecdsa_recover import ecdsa_raw_sign, ecdsa_raw_verify, ecdsa_raw_recover
from ecdsa_recover import ecdsa_sign, ecdsa_verify
except:
ecdsa_raw_sign = bitcoin.ecdsa_raw_sign
ecdsa_raw_verify = bitcoin.ecdsa_raw_verify
ecdsa_raw_recover = bitcoin.ecdsa_raw_recover
ecdsa_sign = bitcoin.ecdsa_sign
ecdsa_verify = bitcoin.ecdsa_verify
hmac_sha256 = pyelliptic.hmac_sha256
class ECIESDecryptionError(Exception):
pass
class ECCx(pyelliptic.ECC):
"""
Modified to work with raw_pubkey format used in RLPx
and binding default curve and cipher
"""
ecies_ciphername = 'aes-128-ctr'
curve = 'secp256k1'
ecies_encrypt_overhead_length = 113
def __init__(self, raw_pubkey=None, raw_privkey=None):
if raw_privkey:
assert not raw_pubkey
raw_pubkey = privtopub(raw_privkey)
if raw_pubkey:
assert len(raw_pubkey) == 64
_, pubkey_x, pubkey_y, _ = self._decode_pubkey(raw_pubkey)
else:
pubkey_x, pubkey_y = None, None
while True:
pyelliptic.ECC.__init__(self, pubkey_x=pubkey_x, pubkey_y=pubkey_y,
raw_privkey=raw_privkey, curve=self.curve)
try:
if self.raw_privkey:
bitcoin.get_privkey_format(self.raw_privkey) # failed for some keys
valid_priv_key = True
except AssertionError:
valid_priv_key = False
if len(self.raw_pubkey) == 64 and valid_priv_key:
break
elif raw_privkey or raw_pubkey:
raise Exception('invalid priv or pubkey')
assert len(self.raw_pubkey) == 64
@property
def raw_pubkey(self):
return self.pubkey_x + self.pubkey_y
@classmethod
def _decode_pubkey(cls, raw_pubkey):
assert len(raw_pubkey) == 64
pubkey_x = raw_pubkey[:32]
pubkey_y = raw_pubkey[32:]
return cls.curve, pubkey_x, pubkey_y, 64
def get_ecdh_key(self, raw_pubkey):
"Compute public key with the local private key and returns a 256bits shared key"
_, pubkey_x, pubkey_y, _ = self._decode_pubkey(raw_pubkey)
key = self.raw_get_ecdh_key(pubkey_x, pubkey_y)
assert len(key) == 32
return key
@property
def raw_privkey(self):
return self.privkey
def is_valid_key(self, raw_pubkey, raw_privkey=None):
try:
assert len(raw_pubkey) == 64
failed = bool(self.raw_check_key(raw_privkey, raw_pubkey[:32], raw_pubkey[32:]))
        except Exception:
failed = True
return not failed
@classmethod
def ecies_encrypt(cls, data, raw_pubkey):
"""
ECIES Encrypt, where P = recipient public key is:
1) generate r = random value
2) generate shared-secret = kdf( ecdhAgree(r, P) )
3) generate R = rG [same op as generating a public key]
4) send 0x04 || R || AsymmetricEncrypt(shared-secret, plaintext) || tag
currently used by go:
ECIES_AES128_SHA256 = &ECIESParams{
Hash: sha256.New,
hashAlgo: crypto.SHA256,
Cipher: aes.NewCipher,
BlockSize: aes.BlockSize,
KeyLen: 16,
}
"""
# 1) generate r = random value
ephem = ECCx()
# 2) generate shared-secret = kdf( ecdhAgree(r, P) )
key_material = ephem.raw_get_ecdh_key(pubkey_x=raw_pubkey[:32], pubkey_y=raw_pubkey[32:])
assert len(key_material) == 32
key = eciesKDF(key_material, 32)
assert len(key) == 32
key_enc, key_mac = key[:16], key[16:]
key_mac = sha256(key_mac).digest() # !!!
assert len(key_mac) == 32
# 3) generate R = rG [same op as generating a public key]
ephem_pubkey = ephem.raw_pubkey
# encrypt
iv = pyelliptic.Cipher.gen_IV(cls.ecies_ciphername)
assert len(iv) == 16
ctx = pyelliptic.Cipher(key_enc, iv, 1, cls.ecies_ciphername)
ciphertext = ctx.ciphering(data)
assert len(ciphertext) == len(data)
# 4) send 0x04 || R || AsymmetricEncrypt(shared-secret, plaintext) || tag
msg = chr(0x04) + ephem_pubkey + iv + ciphertext
# the MAC of a message (called the tag) as per SEC 1, 3.5.
tag = hmac_sha256(key_mac, msg[1 + 64:])
assert len(tag) == 32
msg += tag
assert len(msg) == 1 + 64 + 16 + 32 + len(data) == 113 + len(data)
assert len(msg) - cls.ecies_encrypt_overhead_length == len(data)
return msg
def ecies_decrypt(self, data):
"""
Decrypt data with ECIES method using the local private key
ECIES Decrypt (performed by recipient):
1) generate shared-secret = kdf( ecdhAgree(myPrivKey, msg[1:65]) )
2) verify tag
3) decrypt
ecdhAgree(r, recipientPublic) == ecdhAgree(recipientPrivate, R)
[where R = r*G, and recipientPublic = recipientPrivate*G]
"""
if data[0] != chr(0x04):
raise ECIESDecryptionError("wrong ecies header")
# 1) generate shared-secret = kdf( ecdhAgree(myPrivKey, msg[1:65]) )
_shared = data[1:1 + 64]
# FIXME, check that _shared_pub is a valid one (on curve)
key_material = self.raw_get_ecdh_key(pubkey_x=_shared[:32], pubkey_y=_shared[32:])
assert len(key_material) == 32
key = eciesKDF(key_material, 32)
assert len(key) == 32
key_enc, key_mac = key[:16], key[16:]
key_mac = sha256(key_mac).digest()
assert len(key_mac) == 32
tag = data[-32:]
assert len(tag) == 32
# 2) verify tag
if not pyelliptic.equals(hmac_sha256(key_mac, data[1 + 64:- 32]), tag):
raise ECIESDecryptionError("Fail to verify data")
# 3) decrypt
blocksize = pyelliptic.OpenSSL.get_cipher(self.ecies_ciphername).get_blocksize()
iv = data[1 + 64:1 + 64 + blocksize]
assert len(iv) == 16
ciphertext = data[1 + 64 + blocksize:- 32]
assert 1 + len(_shared) + len(iv) + len(ciphertext) + len(tag) == len(data)
ctx = pyelliptic.Cipher(key_enc, iv, 0, self.ecies_ciphername)
return ctx.ciphering(ciphertext)
encrypt = ecies_encrypt
decrypt = ecies_decrypt
def sign(self, data):
"""
pyelliptic.ECC.sign is DER-encoded
https://bitcoin.stackexchange.com/questions/12554
"""
signature = ecdsa_sign(data, self.raw_privkey)
assert len(signature) == 65
return signature
def verify(self, signature, message):
assert len(signature) == 65
return ecdsa_verify(self.raw_pubkey, signature, message)
def lzpad32(x):
return '\x00' * (32 - len(x)) + x
def _encode_sig(v, r, s):
assert isinstance(v, (int, long))
assert v in (27, 28)
vb, rb, sb = chr(v - 27), bitcoin.encode(r, 256), bitcoin.encode(s, 256)
return lzpad32(rb) + lzpad32(sb) + vb
def _decode_sig(sig):
return ord(sig[64]) + 27, bitcoin.decode(sig[0:32], 256), bitcoin.decode(sig[32:64], 256)
def ecdsa_verify(pubkey, signature, message):
assert len(signature) == 65
assert len(pubkey) == 64
return ecdsa_raw_verify(message, _decode_sig(signature), pubkey)
verify = ecdsa_verify
def ecdsa_sign(message, privkey):
s = _encode_sig(*ecdsa_raw_sign(message, privkey))
return s
sign = ecdsa_sign
def ecdsa_recover(message, signature):
assert len(signature) == 65
pub = ecdsa_raw_recover(message, _decode_sig(signature))
assert pub, 'pubkey could not be recovered'
pub = bitcoin.encode_pubkey(pub, 'bin_electrum')
assert len(pub) == 64
return pub
recover = ecdsa_recover
def sha3(seed):
return sha3_256(seed).digest()
def mk_privkey(seed):
return sha3(seed)
def privtopub(raw_privkey):
raw_pubkey = bitcoin.encode_pubkey(bitcoin.privtopub(raw_privkey), 'bin_electrum')
assert len(raw_pubkey) == 64
return raw_pubkey
def encrypt(data, raw_pubkey):
"""
Encrypt data with ECIES method using the public key of the recipient.
"""
assert len(raw_pubkey) == 64, 'invalid pubkey of len {}'.format(len(raw_pubkey))
return ECCx.encrypt(data, raw_pubkey)
def eciesKDF(key_material, key_len):
"""
interop w/go ecies implementation
for sha3, blocksize is 136 bytes
for sha256, blocksize is 64 bytes
NIST SP 800-56a Concatenation Key Derivation Function (see section 5.8.1).
"""
s1 = ""
key = ""
hash_blocksize = 64
reps = ((key_len + 7) * 8) / (hash_blocksize * 8)
counter = 0
while counter <= reps:
counter += 1
ctx = sha256()
ctx.update(struct.pack('>I', counter))
ctx.update(key_material)
ctx.update(s1)
key += ctx.digest()
return key[:key_len]
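eciesKDF above implements the concatenation KDF by hashing a big-endian 32-bit counter followed by the shared secret. A Python 3 re-sketch that simply loops until enough bytes are produced (equivalent for the SHA-256 / 32-byte case used here):

```python
import struct
from hashlib import sha256

def concat_kdf(key_material, key_len):
    # NIST SP 800-56a concatenation KDF with SHA-256: hash
    # counter || key_material, incrementing the counter until
    # key_len bytes are available, then truncate.
    key = b""
    counter = 0
    while len(key) < key_len:
        counter += 1
        key += sha256(struct.pack('>I', counter) + key_material).digest()
    return key[:key_len]
```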
|
A .wsdl file must be opened by a program that understands its format, which the system identifies from the file's extension. The most common problems with .wsdl files downloaded or received by e-mail are an incorrect program association in the registry, or simply the lack of an installed program that can open them. Usually it is enough to install software that supports the .wsdl format, such as the programs listed in the table below. Files with the .wsdl extension belong to the Developer Files category; the format was created by Microsoft Corporation.
|
#!/usr/bin/env python3
from random import randint
from . import EmulationError
from .constants.reg_rom_stack import STACK_ADDRESS, STACK_SIZE
from .constants.graphics import GFX_FONT_ADDRESS, GFX_RESOLUTION, GFX_ADDRESS, \
GFX_WIDTH, GFX_HEIGHT_PX, GFX_WIDTH_PX, \
SET_VF_ON_GFX_OVERFLOW
# Instructions - All 20 mnemonics, 35 total instructions
# Add-3 SE-2 SNE-2 LD-11 JP-2 (mnemonics w/ extra instructions)
def i_cls(emu):
emu.ram[GFX_ADDRESS:GFX_ADDRESS + GFX_RESOLUTION] = [0x00] * GFX_RESOLUTION
emu.draw_flag = True
def i_ret(emu):
emu.stack_pointer -= 1
if emu.stack_pointer < 0:
emu.log("Stack underflow", EmulationError._Fatal)
emu.program_counter = emu.stack.pop()
def i_sys(emu):
emu.log("RCA 1802 call to " + hex( get_address(emu) ) + " was ignored.", EmulationError._Warning)
def i_call(emu):
if STACK_ADDRESS:
        emu.ram[emu.stack_pointer] = emu.program_counter
emu.stack_pointer += 1
emu.stack.append(emu.program_counter)
if emu.stack_pointer > STACK_SIZE:
emu.log("Stack overflow. Stack is now size " + emu.stack_pointer, EmulationError._Warning)
emu.program_counter = get_address(emu) - 2
def i_skp(emu):
if emu.keypad[ get_reg1_val(emu) & 0x0F ]:
emu.program_counter += 2
def i_sknp(emu):
if not emu.keypad[ get_reg1_val(emu) & 0x0F ]:
emu.program_counter += 2
def i_se(emu):
comp = get_lower_byte(emu) if 'byte' is emu.dis_ins.mnemonic_arg_types[1] else get_reg2_val(emu)
if get_reg1_val(emu) == comp:
emu.program_counter += 2
def i_sne(emu):
comp = get_lower_byte(emu) if 'byte' is emu.dis_ins.mnemonic_arg_types[1] else get_reg2_val(emu)
if get_reg1_val(emu) != comp:
emu.program_counter += 2
def i_shl(emu):
if emu.legacy_shift:
emu.register[0xF] = 0x01 if get_reg2_val(emu) >= 0x80 else 0x0
emu.register[ get_reg1(emu) ] = ( get_reg2_val(emu) << 1 ) & 0xFF
else:
emu.register[0xF] = 0x01 if get_reg1_val(emu) >= 0x80 else 0x0
emu.register[ get_reg1(emu) ] = ( get_reg1_val(emu) << 1 ) & 0xFF
def i_shr(emu):
if emu.legacy_shift:
emu.register[0xF] = 0x01 if ( get_reg2_val(emu) % 2) == 1 else 0x0
emu.register[ get_reg1(emu) ] = get_reg2_val(emu) >> 1
else:
emu.register[0xF] = 0x01 if ( get_reg1_val(emu) % 2) == 1 else 0x0
emu.register[ get_reg1(emu) ] = get_reg1_val(emu) >> 1
def i_or(emu):
emu.register[ get_reg1(emu) ] = get_reg1_val(emu) | get_reg2_val(emu)
def i_and(emu):
emu.register[ get_reg1(emu) ] = get_reg1_val(emu) & get_reg2_val(emu)
def i_xor(emu):
emu.register[ get_reg1(emu) ] = get_reg1_val(emu) ^ get_reg2_val(emu)
def i_sub(emu):
emu.register[0xF] = 0x01 if get_reg1_val(emu) >= get_reg2_val(emu) else 0x00
emu.register[ get_reg1(emu) ] = get_reg1_val(emu) - get_reg2_val(emu)
emu.register[ get_reg1(emu) ] &= 0xFF
def i_subn(emu):
emu.register[0xF] = 0x01 if get_reg2_val(emu) >= get_reg1_val(emu) else 0x00
emu.register[ get_reg1(emu) ] = get_reg2_val(emu) - get_reg1_val(emu)
emu.register[ get_reg1(emu) ] &= 0xFF
def i_jp(emu):
init_pc = emu.program_counter
numb_args = len(emu.dis_ins.mnemonic_arg_types)
if 'v0' is emu.dis_ins.mnemonic_arg_types[0] and numb_args == 2:
emu.program_counter = get_address(emu) + emu.register[0] - 2
elif numb_args == 1:
emu.program_counter = get_address(emu) - 2
else:
emu.log("Unknown argument at address " + hex(emu.program_counter), EmulationError._Fatal)
if init_pc == emu.program_counter + 2:
emu.spinning = True
def i_rnd(emu):
emu.register[ get_reg1(emu) ] = randint(0, 255) & get_lower_byte(emu)
def i_add(emu):
arg1 = emu.dis_ins.mnemonic_arg_types[0]
arg2 = emu.dis_ins.mnemonic_arg_types[1]
if 'reg' is arg1:
if 'byte' is arg2:
emu.register[ get_reg1(emu) ] = get_reg1_val(emu) + get_lower_byte(emu)
emu.register[ get_reg1(emu) ] &= 0xFF
elif 'reg' is arg2:
emu.register[ get_reg1(emu) ] = get_reg1_val(emu) + get_reg2_val(emu)
emu.register[0xF] = 0x01 if emu.register[ get_reg1(emu) ] > 0xFF else 0x00
emu.register[ get_reg1(emu) ] &= 0xFF
else:
emu.log("Unknown argument at address " + hex(emu.program_counter), EmulationError._Fatal)
elif 'i' in arg1 and 'reg' is arg2:
emu.index_register += get_reg1_val(emu)
if (emu.index_register > 0xFF) and SET_VF_ON_GFX_OVERFLOW:
emu.register[0xF] = 0x01
emu.index_register &= 0xFFF
else:
emu.log("Unknown argument at address " + hex(emu.program_counter), EmulationError._Fatal)
def i_ld(emu):
arg1 = emu.dis_ins.mnemonic_arg_types[0]
arg2 = emu.dis_ins.mnemonic_arg_types[1]
if 'reg' is arg1:
if 'byte' is arg2:
emu.register[ get_reg1(emu) ] = get_lower_byte(emu)
elif 'reg' is arg2:
emu.register[ get_reg1(emu) ] = get_reg2_val(emu)
elif 'dt' is arg2:
emu.register[ get_reg1(emu) ] = emu.delay_timer_register
elif 'k' is arg2:
emu.waiting_for_key = True
emu.program_counter -= 2
elif '[i]' == arg2:
emu.register[0: get_reg1(emu) + 1] = emu.ram[ emu.index_register : emu.index_register + get_reg1(emu) + 1]
else:
emu.log("Loads with second argument type '" + arg2 + \
"' are not supported.", EmulationError._Fatal)
elif 'reg' is arg2:
if 'dt' is arg1:
emu.delay_timer_register = get_reg1_val(emu)
elif 'st' is arg1:
emu.sound_timer_register = get_reg1_val(emu)
elif 'f' is arg1:
emu.index_register = GFX_FONT_ADDRESS + ( 5 * get_reg1_val(emu) )
elif 'b' is arg1:
bcd = [int(f) for f in list(str( get_reg1_val(emu) ).zfill(3))]
emu.ram[ emu.index_register : emu.index_register + len(bcd)] = bcd
elif '[i]' == arg1:
emu.ram[ emu.index_register : emu.index_register + get_reg1(emu) + 1] = emu.register[0: get_reg1(emu) + 1]
else:
emu.log("Unknown argument at address " + hex(emu.program_counter), EmulationError._Fatal)
elif 'i' is arg1 and 'addr' is arg2:
emu.index_register = get_address(emu)
else:
emu.log("Unknown argument at address " + hex(emu.program_counter), EmulationError._Fatal)
def i_drw(emu):
emu.draw_flag = True
height = int(emu.dis_ins.hex_instruction[3],16)
x_origin_byte = int( get_reg1_val(emu) / 8 ) % GFX_WIDTH
y_origin_byte = (get_reg2_val(emu) % GFX_HEIGHT_PX) * GFX_WIDTH
shift_amount = get_reg1_val(emu) % GFX_WIDTH_PX % 8
next_byte_offset = 1 if x_origin_byte + 1 != GFX_WIDTH else 1-GFX_WIDTH
emu.register[0xF] = 0x00
for y in range(height):
sprite = emu.ram[ emu.index_register + y ] << (8-shift_amount)
working_bytes = (
GFX_ADDRESS + (( x_origin_byte + y_origin_byte + (y * GFX_WIDTH) ) % GFX_RESOLUTION) ,
GFX_ADDRESS + (( x_origin_byte + y_origin_byte + (y * GFX_WIDTH) + next_byte_offset ) % GFX_RESOLUTION)
)
original = ( emu.ram[ working_bytes[0] ], emu.ram[ working_bytes[1] ] )
xor = (original[0]*256 + original[1]) ^ sprite
emu.ram[ working_bytes[0] ], emu.ram[ working_bytes[1] ] = xor >> 8, xor & 0x00FF
if (bin( ( emu.ram[ working_bytes[0] ] ^ original[0] ) & original[0] ) + \
bin( ( emu.ram[ working_bytes[1] ] ^ original[1] ) & original[1] )).find('1') != -1:
emu.register[0xF] = 0x01
# # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # # #
# Hex Extraction
def get_address(emu):
return int(emu.dis_ins.hex_instruction[1:4], 16)
def get_reg1(emu):
return int(emu.dis_ins.hex_instruction[1],16)
def get_reg2(emu):
return int(emu.dis_ins.hex_instruction[2],16)
def get_reg1_val(emu):
return emu.register[int(emu.dis_ins.hex_instruction[1],16)]
def get_reg2_val(emu):
return emu.register[int(emu.dis_ins.hex_instruction[2],16)]
def get_lower_byte(emu):
return int(emu.dis_ins.hex_instruction[2:4], 16)
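The 'LD B, Vx' branch of i_ld stores a register's value as three binary-coded-decimal digits at I. The digit extraction can be checked in isolation (helper name hypothetical):

```python
def to_bcd(value):
    # Hundreds, tens, ones digits of a byte value, as i_ld's
    # 'b' case computes before writing them to RAM at I.
    return [int(d) for d in str(value).zfill(3)]
```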
|
Android: If you use your Android for streaming or video playback over HDMI, you've likely run into the problem of your tablet's screen staying on unnecessarily. This runs down the battery, and there's no need to see the image on both screens. Screen Standby fixes that problem.
Screen Standby simply sets your screen's brightness to zero while you're streaming or playing video over HDMI. It's a simple trick, but it solves an annoying problem for those of us who use our Androids as media centers. To use it, just tap the button to turn off your backlight and you're set. You can still see the image you're streaming on your TV (or wherever), but your tablet's screen will be off. To get things back to normal, all you have to do is lock and unlock your device.
Screen Standby is free to download, but it does require rooting your device.
|
# -*- coding: utf-8 -*-
# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: http://doc.scrapy.org/en/latest/topics/item-pipeline.html
import scrapy
import pymysql
import hashlib
from scrapy.exceptions import DropItem
class CrawlerPipeline(object):
def __init__(self, my_settings):
self.settings = my_settings
db_host = self.settings.get('DB_HOST')
db_port = self.settings.get('DB_PORT')
db_user = self.settings.get('DB_USER')
db_pass = self.settings.get('DB_PASS')
db_db = self.settings.get('DB_DB')
db_charset = self.settings.get('DB_CHARSET')
self.conn = pymysql.connect(
host=db_host,
port=db_port,
user=db_user,
passwd=db_pass,
database=db_db,
use_unicode=True,
charset=db_charset)
self.cursor = self.conn.cursor()
@classmethod
def from_crawler(cls, crawler):
my_settings = crawler.settings
return cls(my_settings)
def process_item(self, item, spider):
url = item['url']
id = self.get_doc_id(url)
        is_visited = item['is_visited'] if item['is_visited'] is not None else 'N'
raw = item['raw']
parsed = item['parsed']
rvrsd_domain = item['rvrsd_domain']
status = item['status']
if is_visited == "N":
sql = """
INSERT INTO DOC (id, c_time, url, is_visited, rvrsd_domain, visit_cnt)
SELECT %s, now(), %s, %s, %s, 0 FROM DUAL
WHERE NOT EXISTS (SELECT * FROM DOC WHERE id=%s)
"""
self.cursor.execute(sql, (id, url, is_visited, rvrsd_domain, id))
print("Save new URL: [%s] %s" % (id, url))
elif is_visited == "Y":
sql = """
INSERT INTO DOC (id, c_time, v_time, raw, parsed, url, is_visited, rvrsd_domain)
VALUES (%s, now(), now(), %s, %s, %s, %s, %s)
ON DUPLICATE KEY UPDATE raw = %s, is_visited = %s, parsed = %s, v_time = now(), visit_cnt = visit_cnt + 1, status = %s
"""
self.cursor.execute(sql, (id, raw, parsed, url, is_visited, rvrsd_domain, raw, is_visited, parsed, status))
print("Update URL: [%s] %s" % (id, url))
else:
print("Pass URL: [%s] %s" % (id, url))
pass
self.conn.commit()
return item
def get_doc_id(self, url):
return hashlib.md5(url.encode('utf-8')).hexdigest()[0:16]
def open_spider(self, spider):
pass
def close_spider(self, spider):
self.cursor.close()
self.conn.close()
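A note on the id scheme used by `get_doc_id` above: documents are keyed by the first 16 hex digits (64 bits) of the URL's MD5, which is what the `WHERE NOT EXISTS` insert and the `ON DUPLICATE KEY UPDATE` clause rely on for deduplication. A minimal standalone sketch (the URL is a made-up example):

```python
import hashlib

def get_doc_id(url):
    # First 16 hex characters (64 bits) of the URL's MD5, as in the
    # pipeline above; the same URL always maps to the same id.
    return hashlib.md5(url.encode('utf-8')).hexdigest()[0:16]

doc_id = get_doc_id("https://example.com/page")
print(doc_id)  # 16 lowercase hex characters
```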
|
The fund’s objective is to invest principally in United Kingdom equity securities. The aim of the fund is to provide investors with long-term capital growth from diversified and actively managed portfolios of securities. Unless otherwise specified in the investment objective, the income of the fund is expected to be low. The fund will invest principally (at least 70% and normally 75% in value) in equities in the markets and sectors reflected in the name of the fund and in companies established outside those markets but which derive a significant proportion of their earnings from those markets.
|
#!/usr/bin/env python3
#
# This script checks for updates to zcashd's dependencies.
#
# The SOURCE_ROOT constant specifies the location of the zcashd codebase to
# check, and the GITHUB_API_* constants specify a personal access token for the
# GitHub API, which need not have any special privileges.
#
# All dependencies must be specified inside the get_dependency_list() function
# below. A dependency is specified by:
#
# (a) A way to fetch a list of current releases.
#
# This is usually regular-expression-based parsing of GitHub tags, but
# might otherwise parse version numbers out of the project's webpage.
#
# GitHub tag regexps can be tested by specifying test cases in the third
# argument to GithubTagReleaseLister's constructor.
#
# (b) A way to fetch the currently-used version out of the source tree.
#
# This is typically parsed out of the depends/packages/*.mk files.
#
# If any dependency is found to be out-of-date, or there are un-accounted-for
# .mk files in depends/packages, this script will exit with
# a nonzero status. The latter case would suggest someone added a new dependency
# without adding a corresponding entry to get_dependency_list() below.
#
# To test the script itself, run it with --functionality-test as the only
# argument. This will exercise the full functionality of the script, but will
# only return a non-zero exit status when there's something wrong with the
# script itself, for example if a new file was added to depends/packages/ but
# wasn't added to this script.
import requests
import os
import re
import sys
import datetime
SOURCE_ROOT = os.path.join(os.path.dirname(os.path.realpath(__file__)), "..", "..")
def get_dependency_list():
dependencies = [
Dependency("bdb",
BerkeleyDbReleaseLister(),
DependsVersionGetter("bdb")),
Dependency("boost",
GithubTagReleaseLister("boostorg", "boost", "^boost-(\d+)\.(\d+)\.(\d+)$",
{ "boost-1.69.0": (1, 69, 0), "boost-1.69.0-beta1": None }),
DependsVersionGetter("boost")),
Dependency("googletest",
GithubTagReleaseLister("google", "googletest", "^release-(\d+)\.(\d+)\.(\d+)$",
{ "release-1.8.1": (1, 8, 1) }),
DependsVersionGetter("googletest")),
# libc++ matches the Clang version
Dependency("libcxx",
GithubTagReleaseLister("llvm", "llvm-project", "^llvmorg-(\d+)\.(\d+).(\d+)$",
{ "llvmorg-11.0.0": (11, 0, 0), "llvmorg-9.0.1-rc3": None}),
DependsVersionGetter("native_clang")),
Dependency("libevent",
GithubTagReleaseLister("libevent", "libevent", "^release-(\d+)\.(\d+)\.(\d+)-stable$",
{ "release-2.0.22-stable": (2, 0, 22), "release-2.1.9-beta": None }),
DependsVersionGetter("libevent")),
Dependency("libsodium",
GithubTagReleaseLister("jedisct1", "libsodium", "^(\d+)\.(\d+)\.(\d+)$",
{ "1.0.17": (1, 0, 17) }),
DependsVersionGetter("libsodium")),
# b2 matches the Boost version
Dependency("native_b2",
GithubTagReleaseLister("boostorg", "boost", "^boost-(\d+)\.(\d+)\.(\d+)$",
{ "boost-1.69.0": (1, 69, 0), "boost-1.69.0-beta1": None }),
DependsVersionGetter("boost")),
Dependency("native_ccache",
GithubTagReleaseLister("ccache", "ccache", "^v?(\d+)\.(\d+)(?:\.(\d+))?$",
{ "v3.5.1": (3, 5, 1), "v3.6": (3, 6)}),
DependsVersionGetter("native_ccache")),
Dependency("native_clang",
GithubTagReleaseLister("llvm", "llvm-project", "^llvmorg-(\d+)\.(\d+).(\d+)$",
{ "llvmorg-11.0.0": (11, 0, 0), "llvmorg-9.0.1-rc3": None}),
DependsVersionGetter("native_clang")),
Dependency("native_rust",
GithubTagReleaseLister("rust-lang", "rust", "^(\d+)\.(\d+)(?:\.(\d+))?$",
{ "1.33.0": (1, 33, 0), "0.9": (0, 9) }),
DependsVersionGetter("native_rust")),
Dependency("zeromq",
GithubTagReleaseLister("zeromq", "libzmq", "^v(\d+)\.(\d+)(?:\.(\d+))?$",
{ "v4.3.1": (4, 3, 1), "v4.2.0-rc1": None }),
DependsVersionGetter("zeromq")),
Dependency("leveldb",
GithubTagReleaseLister("google", "leveldb", "^v(\d+)\.(\d+)$",
{ "v1.13": (1, 13) }),
LevelDbVersionGetter()),
Dependency("univalue",
GithubTagReleaseLister("bitcoin-core", "univalue", "^v(\d+)\.(\d+)\.(\d+)$",
{ "v1.0.1": (1, 0, 1) }),
UnivalueVersionGetter()),
Dependency("utfcpp",
GithubTagReleaseLister("nemtrif", "utfcpp", "^v(\d+)\.(\d+)(?:\.(\d+))?$",
{ "v3.1": (3, 1), "v3.0.3": (3, 0, 3) }),
DependsVersionGetter("utfcpp"))
]
return dependencies
class GitHubToken:
def __init__(self):
token_path = os.path.join(SOURCE_ROOT, ".updatecheck-token")
try:
with open(token_path, encoding='utf8') as f:
token = f.read().strip()
self._user = token.split(":")[0]
self._password = token.split(":")[1]
except:
print("Please make sure a GitHub API token is in .updatecheck-token in the root of this repository.")
print("The format is username:hex-token.")
sys.exit(1)
    def user(self):
        return self._user
    def password(self):
        return self._password
class Version(list):
def __init__(self, version_tuple):
for part in version_tuple:
if part: # skip None's which can come from optional regexp groups
if str(part).isdigit():
self.append(int(part))
else:
self.append(part)
def __str__(self):
return '.'.join(map(str, self))
def __hash__(self):
return hash(tuple(self))
class Dependency:
def __init__(self, name, release_lister, current_getter):
self.name = name
self.release_lister = release_lister
self.current_getter = current_getter
self.cached_known_releases = None
def current_version(self):
return self.current_getter.current_version()
def known_releases(self):
if self.cached_known_releases is None:
self.cached_known_releases = sorted(self.release_lister.known_releases())
return self.cached_known_releases
def released_versions_after_current_version(self):
current_version = self.current_version()
releases_after_current = []
for release in self.known_releases():
if release > current_version:
releases_after_current.append(release)
return releases_after_current
def is_up_to_date(self):
return len(self.released_versions_after_current_version()) == 0
class GithubTagReleaseLister:
def __init__(self, org, repo, regex, testcases={}):
self.org = org
self.repo = repo
self.regex = regex
self.testcases = testcases
self.token = GitHubToken()
for tag, expected in testcases.items():
match = re.match(self.regex, tag)
if (expected and not match) or (match and not expected) or (match and Version(match.groups()) != list(expected)):
groups = str(match.groups())
raise RuntimeError("GitHub tag regex test case [" + tag + "] failed, got [" + groups + "].")
def known_releases(self):
release_versions = []
all_tags = self.all_tag_names()
# sanity check against the test cases
for tag, expected in self.testcases.items():
if tag not in all_tags:
raise RuntimeError("Didn't find expected tag [" + tag + "].")
for tag_name in all_tags:
match = re.match(self.regex, tag_name)
if match:
release_versions.append(Version(match.groups()))
return release_versions
def all_tag_names(self):
url = "https://api.github.com/repos/" + safe(self.org) + "/" + safe(self.repo) + "/git/refs/tags"
r = requests.get(url, auth=requests.auth.HTTPBasicAuth(self.token.user(), self.token.password()))
if r.status_code != 200:
raise RuntimeError("Request to GitHub tag API failed.")
json = r.json()
return list(map(lambda t: t["ref"].split("/")[-1], json))
class BerkeleyDbReleaseLister:
def known_releases(self):
url = "https://www.oracle.com/database/technologies/related/berkeleydb-downloads.html"
r = requests.get(url)
if r.status_code != 200:
raise RuntimeError("Request to Berkeley DB download directory failed.")
page = r.text
# We use a set because the search will result in duplicates.
release_versions = set()
for match in re.findall("Berkeley DB (\d+)\.(\d+)\.(\d+)\.tar.gz", page):
release_versions.add(Version(match))
if len(release_versions) == 0:
raise RuntimeError("Missing expected version from Oracle web page.")
return list(release_versions)
class DependsVersionGetter:
def __init__(self, name):
self.name = name
def current_version(self):
mk_file_path = os.path.join(SOURCE_ROOT, "depends", "packages", safe_depends(self.name) + ".mk")
mk_file = open(mk_file_path, 'r', encoding='utf8').read()
regexp_whitelist = [
"package\)_version=(\d+)\.(\d+)\.(\d+)$",
"package\)_version=(\d+)\.(\d+)$",
"package\)_version=(\d+)_(\d+)_(\d+)$",
"package\)_version=(\d+)\.(\d+)\.(\d+)([a-z])$",
# Workaround for wasi 0.9.0 preview
"package\)_version=(\d+)\.(\d+)\.(\d+)\+wasi-snapshot-preview1$",
]
current_version = None
for regexp in regexp_whitelist:
match = re.search(regexp, mk_file, re.MULTILINE)
if match:
current_version = Version(match.groups())
if not current_version:
raise RuntimeError("Couldn't parse version number from depends .mk file.")
return current_version
class LevelDbVersionGetter:
def current_version(self):
header_path = os.path.join(SOURCE_ROOT, "src", "leveldb", "include", "leveldb", "db.h")
header_contents = open(header_path, 'r', encoding='utf8').read()
match = re.search("kMajorVersion\s*=\s*(\d+);\s*.*kMinorVersion\s*=\s*(\d+);\s*$", header_contents, re.MULTILINE)
if match:
return Version(match.groups())
else:
raise RuntimeError("Couldn't parse LevelDB's version from db.h")
class UnivalueVersionGetter:
def current_version(self):
configure_path = os.path.join(SOURCE_ROOT, "src", "univalue", "configure.ac")
configure_contents = open(configure_path, 'r', encoding='utf8').read()
match = re.search("AC_INIT.*univalue.*\[(\d+)\.(\d+)\.(\d+)\]", configure_contents)
if match:
return Version(match.groups())
else:
raise RuntimeError("Couldn't parse univalue's version from its configure.ac")
class PostponedUpdates():
def __init__(self):
self.postponedlist = dict()
postponedlist_path = os.path.join(
os.path.dirname(__file__),
"postponed-updates.txt"
)
file = open(postponedlist_path, 'r', encoding='utf8')
for line in file.readlines():
stripped = re.sub('#.*$', '', line).strip()
if stripped != "":
match = re.match('^(\S+)\s+(\S+)\s+(\S+)$', stripped)
if match:
postponed_name = match.groups()[0]
postponed_version = Version(match.groups()[1].split("."))
postpone_expiration = datetime.datetime.strptime(match.groups()[2], '%Y-%m-%d')
if datetime.datetime.utcnow() < postpone_expiration:
self.postponedlist[(postponed_name, str(postponed_version))] = True
else:
raise RuntimeError("Could not parse line in postponed-updates.txt:" + line)
def is_postponed(self, name, version):
return (name, str(version)) in self.postponedlist
def safe(string):
if re.match('^[a-zA-Z0-9_-]*$', string):
return string
else:
raise RuntimeError("Potentially-dangerous string encountered.")
def safe_depends(string):
if re.match('^[a-zA-Z0-9._-]*$', string):
return string
else:
raise RuntimeError("Potentially-dangerous string encountered.")
def print_row(name, status, current_version, known_versions):
COL_FMT_LARGE = "{:<35}"
COL_FMT_SMALL = "{:<18}"
print(COL_FMT_LARGE.format(name) +
COL_FMT_SMALL.format(status) +
COL_FMT_SMALL.format(current_version) +
COL_FMT_SMALL.format(known_versions))
def main():
# Get a list of all depends-system dependencies so we can verify that we're
# checking them all for updates.
unchecked_dependencies = [f[:-3] for f in os.listdir(os.path.join(SOURCE_ROOT, "depends", "packages")) if f.endswith(".mk")]
untracked = [
# packages.mk is not a dependency, it just specifies the list of them all.
"packages",
# This package doesn't have conventional version numbers
"native_cctools"
]
print_row("NAME", "STATUS", "CURRENT VERSION", "NEWER VERSIONS")
status = 0
for dep in untracked:
print_row(dep, "skipped", "", "")
if dep in unchecked_dependencies:
unchecked_dependencies.remove(dep)
else:
print("Error: Please remove " + dep + " from the list of unchecked dependencies.")
status = 3
# Exit early so the problem is clear from the output.
if status != 0:
sys.exit(status)
deps = get_dependency_list()
postponed = PostponedUpdates()
for dependency in deps:
if dependency.name in unchecked_dependencies:
unchecked_dependencies.remove(dependency.name)
if dependency.is_up_to_date():
print_row(
dependency.name,
"up to date",
str(dependency.current_version()),
"")
else:
# The status can either be POSTPONED or OUT OF DATE depending
# on whether or not all the new versions are whitelisted.
status_text = "POSTPONED"
newver_list = "["
for newver in dependency.released_versions_after_current_version():
if postponed.is_postponed(dependency.name, newver):
newver_list += str(newver) + " (postponed),"
else:
newver_list += str(newver) + ","
status_text = "OUT OF DATE"
status = 1
newver_list = newver_list[:-1] + "]"
print_row(
dependency.name,
status_text,
str(dependency.current_version()),
newver_list
)
if len(unchecked_dependencies) > 0:
unchecked_dependencies.sort()
print("WARNING: The following dependencies are not being checked for updates by this script: " + ', '.join(unchecked_dependencies))
sys.exit(2)
if len(sys.argv) == 2 and sys.argv[1] == "--functionality-test":
print("We're only testing this script's functionality. The exit status will only be nonzero if there's a problem with the script itself.")
sys.exit(0)
if status == 0:
print("All non-Rust dependencies are up-to-date or postponed.")
elif status == 1:
print("Release is BLOCKED. There are new dependency updates that have not been postponed.")
print("""
You should also check the Rust dependencies using cargo:
cargo install cargo-outdated cargo-audit
cargo outdated
cargo audit
""")
if status == 0:
print("After checking those, you'll be ready for release! :-)")
sys.exit(status)
main()
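A note on why `Version` subclasses `list` in the script above: `released_versions_after_current_version()` compares versions with `>`, and list comparison is element-wise, so numeric parts compare as integers rather than strings. A standalone sketch of that behaviour (a miniature of the class, without the `__hash__` override):

```python
class Version(list):
    # Miniature of the Version class above: digit parts become ints, so
    # element-wise list comparison orders 1.10.0 after 1.9.0.
    def __init__(self, version_tuple):
        for part in version_tuple:
            if part:  # skip None's from optional regexp groups
                self.append(int(part) if str(part).isdigit() else part)

    def __str__(self):
        return '.'.join(map(str, self))

assert Version(("1", "10", "0")) > Version(("1", "9", "0"))  # 10 > 9 as ints
assert str(Version(("2", "1", None))) == "2.1"               # None group dropped
print([str(v) for v in sorted([Version(("1", "10")), Version(("1", "2"))])])  # → ['1.2', '1.10']
```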
|
The Cadillac CT5 is an executive car manufactured and marketed by Cadillac. It debuted at the 2019 New York Auto Show. It will be available in three trim levels, with an optional Super Cruise semi-autonomous driving system, and will go on sale in the fall of 2019.
|
#!/usr/bin/env python
"""
main.py -- Udacity conference server-side Python App Engine
HTTP controller handlers for memcache & task queue access
$Id$
created by wesc on 2014 may 24
"""
__author__ = 'wesc+api@google.com (Wesley Chun)'
import webapp2
from google.appengine.api import app_identity
from google.appengine.api import mail
from google.appengine.api import memcache
from google.appengine.ext import ndb
from models import Conference
from models import Session
from conference import ConferenceApi
MEMCACHE_SPEAKER_KEY = 'SPEAKER'
SPEAKER_TPL = 'More sessions from %s: %s.'
class SetAnnouncementHandler(webapp2.RequestHandler):
def get(self):
"""Set Announcement in Memcache."""
ConferenceApi._cacheAnnouncement()
self.response.set_status(204)
class SendConfirmationEmailHandler(webapp2.RequestHandler):
def post(self):
"""Send email confirming Conference creation."""
mail.send_mail(
'noreply@%s.appspotmail.com' % (
app_identity.get_application_id()), # from
self.request.get('email'), # to
'You created a new Conference!', # subj
            'Hi, you have created the following '  # body
'conference:\r\n\r\n%s' % self.request.get(
'conferenceInfo')
)
class SetFeatureSpeakerHandler(webapp2.RequestHandler):
def post(self):
"""Sets the featured speaker in memcache"""
# Retrieves a list of sessions from the same speaker at this conference
p_key = ndb.Key(urlsafe=self.request.get('websafeConferenceKey'))
sessions_by_speaker = Session.query(ancestor=p_key)\
.filter(Session.speaker == self.request.get('speaker'))
if sessions_by_speaker.count() > 0:
sessions_str = ''
for session in sessions_by_speaker:
sessions_str += session.name + ', '
sessions_str = sessions_str[:-2]
speaker_memcache_message = SPEAKER_TPL % (self.request.get('speaker'), sessions_str)
memcache.set(MEMCACHE_SPEAKER_KEY, speaker_memcache_message)
app = webapp2.WSGIApplication([
('/crons/set_announcement', SetAnnouncementHandler),
('/tasks/send_confirmation_email', SendConfirmationEmailHandler),
('/tasks/setFeaturedSpeaker', SetFeatureSpeakerHandler),
], debug=True)
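For reference, the featured-speaker message that `SetFeatureSpeakerHandler` stores in memcache is just `SPEAKER_TPL` filled with the speaker name and a comma-joined session list (the handler's `[:-2]` strips the trailing `', '`). A standalone sketch of the same formatting, with made-up speaker and session names:

```python
# Standalone sketch of the memcache message built above; 'Ada', 'Keynote'
# and 'Workshop' are made-up example values.
SPEAKER_TPL = 'More sessions from %s: %s.'

def featured_speaker_message(speaker, session_names):
    return SPEAKER_TPL % (speaker, ', '.join(session_names))

msg = featured_speaker_message('Ada', ['Keynote', 'Workshop'])
print(msg)  # → More sessions from Ada: Keynote, Workshop.
```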
|
This USL side has set aside a pair of tryout windows to see who would like to kick it with them this season. The first runs Jan. 28 from 1:30-3:30 p.m. and Tues. Jan. 29 from 10:30 a.m.-12:30 p.m.; the second runs Sat. Apr. 27 from 3-4:30 p.m. and Sun. Apr. 28 from 9-10:30 a.m., both at WakeMed Soccer Park. If no field is available there, the sessions will be relocated to another field in the area. Each tryout carries a $110 fee and includes a ticket to see NCFC take on Hartford on Sat. Apr. 27.
|
import json
from datetime import datetime, date, time, timedelta
from view_utils import get_offices, get_reservations_data, is_reservation_on_date
from view_utils import send_reservation_notification, send_reschedule_notificaion, send_cancel_notificaion
from django.conf import settings
from django.contrib.auth.decorators import login_required
from django.contrib.auth import authenticate, login as django_login, logout as django_logout
from django.views.decorators.csrf import csrf_exempt
from django.contrib import messages
from django.http import HttpResponseRedirect, HttpResponse, Http404
from django.shortcuts import render_to_response, get_object_or_404
from django.template import RequestContext
from django.template.loader import render_to_string
from django.utils.translation import ugettext_lazy as _
from medobs.reservations.forms import PatientForm, PatientDetailForm
from medobs.reservations.models import Office, Patient, Reservation
def front_page(request):
try:
if request.user.is_authenticated():
office = Office.objects.filter(published=True)[0]
else:
office = Office.objects.filter(published=True, authenticated_only=False)[0]
except IndexError:
return render_to_response(
"missing_config.html",
{},
context_instance=RequestContext(request)
)
return HttpResponseRedirect("/office/%d/" % office.id)
class DateInPast(Exception):
pass
class BadStatus(Exception):
pass
def office_page(request, office_id, for_date=None):
office = get_object_or_404(Office, published=True, pk=office_id)
if not request.user.is_authenticated() and office.authenticated_only: # authentication required
return HttpResponseRedirect("/")
reschedule_reservation = request.GET.get('reschedule')
if reschedule_reservation:
try:
reschedule_reservation = Reservation.objects.get(pk=reschedule_reservation)
except Reservation.DoesNotExist:
raise Http404
form = None
message = None
start_date = date.today()
end_date = start_date + timedelta(office.days_to_generate)
dates = list(Reservation.objects.filter(date__gte=date.today()).dates("date", "day"))
if dates:
if not request.user.is_authenticated():
start_date = dates[0]
end_date = dates[-1]
if for_date:
actual_date = datetime.strptime(for_date, "%Y-%m-%d").date()
if actual_date < start_date:
actual_date = start_date
else:
actual_date = start_date
reservation_id = 0
if request.method == 'POST':
action = request.POST.get("action")
if action == "reschedule":
old_reservation = get_object_or_404(Reservation, pk=request.POST.get("old_reservation"))
new_reservation = get_object_or_404(Reservation, pk=request.POST.get("reservation"))
if new_reservation.patient or new_reservation.get_actual_status() == Reservation.STATUS_DISABLED:
messages.error(
request,
render_to_string(
"messages/reschedule_failed.html", {
"old_reservation": old_reservation,
"new_reservation": new_reservation,
}
)
)
return HttpResponseRedirect("/status/%d/" % new_reservation.pk)
actual_date = new_reservation.date
new_reservation.patient = old_reservation.patient
new_reservation.exam_kind = old_reservation.exam_kind
old_reservation.cancel()
new_reservation.save()
old_reservation.save()
send_reschedule_notificaion(old_reservation, new_reservation)
messages.success(
request,
render_to_string(
"messages/rescheduled.html", {
"old_reservation": old_reservation,
"new_reservation": new_reservation,
}
)
)
return HttpResponseRedirect("/status/%d/" % new_reservation.pk)
else:
form = PatientForm(request.POST)
form.fields["exam_kind"].queryset = office.exam_kinds.all()
if form.is_valid():
try:
reservation = form.cleaned_data["reservation"]
actual_date = reservation.date
reservation_id = reservation.id
if request.user.is_authenticated():
if reservation.status not in (Reservation.STATUS_ENABLED, Reservation.STATUS_IN_HELD):
raise BadStatus()
else:
if reservation.status != Reservation.STATUS_ENABLED:
raise BadStatus()
datetime_limit = datetime.combine(date.today() + timedelta(1), time(0, 0))
if reservation.starting_time < datetime_limit:
raise DateInPast()
hexdigest = Patient.get_ident_hash(form.cleaned_data["ident_hash"])
patient, patient_created = Patient.objects.get_or_create(
ident_hash=hexdigest,
defaults={
"first_name": form.cleaned_data["first_name"],
"last_name": form.cleaned_data["last_name"],
"ident_hash": form.cleaned_data["ident_hash"],
"phone_number": form.cleaned_data["phone_number"],
"email": form.cleaned_data["email"],
}
)
if not patient_created and patient.has_reservation():
messages.error(
request,
render_to_string(
"messages/creation_failed.html", {
"reservations": patient.actual_reservations(),
"user": request.user,
}
)
)
return HttpResponseRedirect("/status/%d/" % reservation.pk)
if not patient_created:
patient.first_name = form.cleaned_data["first_name"]
patient.last_name = form.cleaned_data["last_name"]
patient.phone_number = form.cleaned_data["phone_number"]
patient.email = form.cleaned_data["email"]
patient.save()
reservation.patient = patient
reservation.exam_kind = form.cleaned_data["exam_kind"]
reservation.status = Reservation.STATUS_ENABLED # clean 'in held' state
reservation.reservation_time = datetime.now()
reservation.reserved_by = request.user.username
reservation.save()
send_reservation_notification(reservation)
messages.success(
request,
render_to_string(
"messages/created.html", {
"reservation": reservation,
}
)
)
return HttpResponseRedirect("/status/%d/" % reservation.pk)
except DateInPast:
message = _("Can't make reservation for current day or day in the past.")
except BadStatus:
message = _("Can't make reservation. Please try again.")
reservation_id = 0
else:
r_val = form["reservation"].value()
if r_val:
reservation_id = int(r_val)
actual_date = Reservation.objects.get(pk=reservation_id).date
if form is None:
form = PatientForm()
form.fields["exam_kind"].queryset = office.exam_kinds.all()
office_data = {
"id": office.id,
"name": office.name,
"reservations": json.dumps(
get_reservations_data(
office.reservations(actual_date),
all_attrs=request.user.is_authenticated()
)
),
"days_status": json.dumps(office.days_status(start_date, end_date))
}
data = {
"offices": get_offices(request.user),
"office": office_data,
"form": form,
"message": message,
"start_date": start_date,
"actual_date": actual_date,
"end_date": end_date,
"reservation_id": reservation_id,
"reschedule_mode": reschedule_reservation is not None
}
if reschedule_reservation:
data.update({
"reschedule_mode": True,
"reservation": reschedule_reservation
})
return render_to_response(
"index.html",
data,
context_instance=RequestContext(request)
)
def date_reservations(request, for_date, office_id):
office = get_object_or_404(Office, pk=office_id)
for_date = datetime.strptime(for_date, "%Y-%m-%d").date()
data = get_reservations_data(
office.reservations(for_date),
all_attrs=request.user.is_authenticated()
)
response = HttpResponse(json.dumps(data), "application/json")
response["Cache-Control"] = "no-cache"
return response
@login_required
def patient_details(request):
response_data = {
"first_name": "",
"last_name": "",
"phone_number": "",
"email": "",
}
if request.method == 'POST':
form = PatientDetailForm(request.POST)
if form.is_valid():
hexdigest = Patient.get_ident_hash(form.cleaned_data["ident_hash"])
try:
patient = Patient.objects.get(ident_hash=hexdigest)
response_data = {
"pk": patient.pk,
"first_name": patient.first_name,
"last_name": patient.last_name,
"phone_number": patient.phone_number,
"email": patient.email,
}
except Patient.DoesNotExist:
pass
return HttpResponse(json.dumps(response_data), "application/json")
@login_required
def hold_reservation(request, r_id):
reservation = get_object_or_404(Reservation, pk=r_id)
if reservation.status == Reservation.STATUS_ENABLED:
reservation.status = Reservation.STATUS_IN_HELD
reservation.reservation_time = datetime.now()
reservation.reserved_by = request.user.username
reservation.save()
response_data = {"status_ok": True}
else:
response_data = {"status_ok": False}
response = HttpResponse(json.dumps(response_data), "application/json")
response["Cache-Control"] = "no-cache"
return response
@login_required
def unhold_reservation(request, r_id):
reservation = get_object_or_404(Reservation, pk=r_id)
if reservation.status == Reservation.STATUS_IN_HELD:
reservation.status = Reservation.STATUS_ENABLED
reservation.reservation_time = None
reservation.reserved_by = ""
reservation.save()
response_data = {"status_ok": True}
else:
response_data = {"status_ok": False}
response = HttpResponse(json.dumps(response_data), "application/json")
response["Cache-Control"] = "no-cache"
return response
@login_required
def cancel_reservation(request):
reservation = get_object_or_404(Reservation, pk=request.POST.get('reservation_id'))
tmp_reservation = Reservation(
office=reservation.office,
patient=reservation.patient,
date=reservation.date,
time=reservation.time,
exam_kind=reservation.exam_kind
)
if reservation.patient is not None:
reservation.cancel()
reservation.save()
send_cancel_notificaion(tmp_reservation)
messages.success(
request,
render_to_string(
"messages/canceled.html", {
"reservation": tmp_reservation,
}
)
)
else:
messages.error(
request,
render_to_string(
"messages/cancel_failed.html", {
"reservation": tmp_reservation
}
)
)
return HttpResponseRedirect("/status/%d/" % reservation.pk)
@login_required
def disable_reservation(request, r_id):
reservation = get_object_or_404(Reservation, pk=r_id)
if reservation.status in (Reservation.STATUS_ENABLED, Reservation.STATUS_IN_HELD) and request.user.is_staff:
reservation.status = Reservation.STATUS_DISABLED
reservation.reservation_time = datetime.now()
reservation.reserved_by = request.user.username
reservation.save()
response_data = {"status_ok": True}
else:
response_data = {"status_ok": False}
response = HttpResponse(json.dumps(response_data), "application/json")
response["Cache-Control"] = "no-cache"
return response
@login_required
def enable_reservation(request, r_id):
reservation = get_object_or_404(Reservation, pk=r_id)
if reservation.status == Reservation.STATUS_DISABLED and request.user.is_staff:
reservation.status = Reservation.STATUS_ENABLED
reservation.reservation_time = None
reservation.reserved_by = ""
reservation.save()
response_data = {"status_ok": True}
else:
response_data = {"status_ok": False}
response = HttpResponse(json.dumps(response_data), "application/json")
response["Cache-Control"] = "no-cache"
return response
@login_required
def list_reservations(request, for_date, office_id):
for_date = datetime.strptime(for_date, "%Y-%m-%d").date()
office = get_object_or_404(Office, pk=office_id)
return render_to_response(
"list/office.html",
{
"for_date": for_date,
"office": office,
"reservations": get_reservations_data(office.reservations(for_date)),
},
context_instance=RequestContext(request)
)
@login_required
def reservation_details(request, r_id):
reservation = get_object_or_404(Reservation, pk=r_id)
response_data = {
"first_name": reservation.patient.first_name,
"last_name": reservation.patient.last_name,
"phone_number": reservation.patient.phone_number,
"email": reservation.patient.email,
"exam_kind": reservation.exam_kind_id,
}
response = HttpResponse(json.dumps(response_data), "application/json")
response["Cache-Control"] = "no-cache"
return response
@login_required
def patient_reservations(request):
response_data = {"patient": None}
if request.method == 'POST':
ident_hash = request.POST.get("ident_hash", "")
if len(ident_hash) < 12:
ident_hash = Patient.get_ident_hash(ident_hash)
try:
response_data["patient"] = Patient.objects.get(ident_hash=ident_hash)
except Patient.DoesNotExist:
raise Http404
return render_to_response(
"list/patient.html",
response_data,
context_instance=RequestContext(request)
)
raise Http404
def days_status(request, year, month, office_id):
office = get_object_or_404(Office, pk=office_id)
year = int(year)
month = int(month)
start_date = date(year, month, 1)
if month == 12:
        end_date = date(year, 12, 31)
else:
end_date = date(year, month + 1, 1) - timedelta(1)
response_data = office.days_status(start_date, end_date)
response = HttpResponse(json.dumps(response_data), "application/json")
response["Cache-Control"] = "no-cache"
return response
@csrf_exempt
def login(request):
try:
if request.POST:
username = request.POST["username"]
password = request.POST["password"]
if username and password:
user = authenticate(username=username, password=password)
if user and user.is_authenticated():
django_login(request, user)
return HttpResponse(status=200)
    except KeyError:
        pass
return HttpResponse(status=401)
@login_required
def logout(request):
django_logout(request)
return HttpResponse(status=200)
@login_required
def list_offices(request):
response_data = [{
"id": office.pk,
"name": office.name,
"street": office.street,
"zip_code": office.zip_code,
"city": office.city,
"email": office.email,
"order": office.order,
"authenticated_only": office.authenticated_only,
"phones": [phone.number for phone in office.phone_numbers.all()],
} for office in Office.objects.filter(published=True)]
return HttpResponse(json.dumps(response_data), "application/json")
@login_required
def enable_auth_only(request, r_id):
reservation = get_object_or_404(Reservation, pk=r_id)
reservation.authenticated_only = True
reservation.save()
response_data = {"status_ok": True}
response = HttpResponse(json.dumps(response_data), "application/json")
response["Cache-Control"] = "no-cache"
return response
@login_required
def disable_auth_only(request, r_id):
reservation = get_object_or_404(Reservation, pk=r_id)
reservation.authenticated_only = False
reservation.save()
response_data = {"status_ok": True}
response = HttpResponse(json.dumps(response_data), "application/json")
response["Cache-Control"] = "no-cache"
return response
# vim: set ts=4 sts=4 sw=4 noet:
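The month-boundary arithmetic in `days_status` is easy to get wrong for December. A minimal standalone sketch of that calculation, independent of Django (the `month_range` name is illustrative, not part of the views module):

```python
from datetime import date, timedelta

def month_range(year, month):
    """Return (first_day, last_day) of the given month."""
    start = date(year, month, 1)
    if month == 12:
        # December always ends on the 31st; month + 1 would overflow
        end = date(year, 12, 31)
    else:
        # first day of the next month, minus one day
        end = date(year, month + 1, 1) - timedelta(days=1)
    return start, end
```

Leap years come out right automatically, since `date` arithmetic does the work.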
|
Since joining Farm Bureau in 2000, I have learned a great deal about the insurance industry. I have also learned a great deal about people in the Hickory area. I fully understand the important role that insurance plays in their lives. As a long-time client, and now as an agent, I'd like to offer you the same services that I have received since 1985. Give me a call today for a free, complete, confidential insurance review. Thanks for allowing me the opportunity to serve you now and in the future.
|
#
# Copyright 2015-2019 University of Southern California
# Distributed under the Apache License, Version 2.0. See LICENSE for more info.
#
"""Filesystem-backed object bulk storage for Hatrac.
This module handles only low-level byte storage. Object and
object-version lifecycle and authorization is handled by the caller.
"""
import os
import hashlib
import base64
import binascii
import random
import struct
import io
from ...core import BadRequest, Conflict, coalesce
def make_file(dirname, relname, accessmode):
"""Create and open file with accessmode, including missing parents.
Returns fp.
"""
# TODO: test for conflicts during creation?
filename = "%s/%s" % (dirname, relname)
if not os.path.exists(dirname):
os.makedirs(dirname, mode=0o755)
return open(filename, accessmode, 0)
class HatracStorage (object):
"""Implement HatracStorage API using basic POSIX filesystem mapping.
A configured storage rootdir, object name, and object version
are combined to form one filename to store the immutable
object:
/ rootdir / object_name : object_version
consistent with Hatrac rules. The incoming name may include
RFC3986 percent-encoded URL characters, which we assume our
filesystem can tolerate.
"""
track_chunks = False
_bufsize = 1024**2
def __init__(self, config):
self.root = config.get('storage_path', '/var/www/hatrac')
def _dirname_relname(self, name, version):
"""Map Hatrac identifiers to backend storage."""
# TODO: consider hashing if too many namespaces exist at top level
assert name
assert version
assert ':' not in version
dirname = self.root
nameparts = [ n for n in name.split('/') if n ]
dirparts = nameparts[0:-1]
relpart = nameparts[-1]
relname = "%s:%s" % (relpart, version)
assert relpart
if dirparts:
dirname = "%s/%s" % (self.root, "/".join(dirparts))
else:
dirname = self.root
return (dirname, relname)
def create_from_file(self, name, input, nbytes, metadata={}):
"""Create an entire file-version object from input content, returning version ID."""
version = base64.b32encode(
(struct.pack('Q', random.getrandbits(64))
+ struct.pack('Q', random.getrandbits(64)))[0:26]
).decode().replace('=', '') # strip off '=' padding
dirname, relname = self._dirname_relname(name, version)
f = make_file(dirname, relname, 'wb')
# upload whole content at offset 0 (for code reuse)
self.upload_chunk_from_file(None, None, 0, 0, input, nbytes, metadata, f)
return version
def create_upload(self, name, nbytes=None, metadata={}):
upload_id = self.create_from_file(name, io.BytesIO(b''), 0)
return upload_id
def cancel_upload(self, name, upload_id):
# this backend uses upload_id as version_id
self.delete(name, upload_id)
return None
def finalize_upload(self, name, upload_id, chunk_data, metadata={}):
# nothing changes in storage for this backend strategy
version_id = upload_id
assert chunk_data is None
# aggressively validate uploaded content against pre-defined MD5 if it was given at job start
if 'content-md5' in metadata:
dirname, relname = self._dirname_relname(name, version_id)
fullname = "%s/%s" % (dirname, relname)
f = open(fullname, "rb")
hasher = hashlib.md5()
eof = False
while not eof:
buf = f.read(self._bufsize)
if len(buf) != 0:
hasher.update(buf)
else:
eof = True
stored_md5 = hasher.digest()
if metadata['content-md5'] != stored_md5:
raise Conflict(
'Current uploaded content MD5 %s does not match expected %s.'
% (binascii.hexlify(stored_md5), binascii.hexlify(metadata['content-md5']))
)
return version_id
def upload_chunk_from_file(self, name, version, position, chunksize, input, nbytes, metadata={}, f=None):
"""Save chunk data into storage.
If self.track_chunks, return value must be None or a value
that can be serialized using webauthn2.util.jsonWriteRaw,
i.e. dict, array, or scalar values.
"""
if f is None:
dirname, relname = self._dirname_relname(name, version)
fullname = "%s/%s" % (dirname, relname)
f = open(fullname, "r+b")
f.seek(position*chunksize)
if 'content-md5' in metadata:
hasher = hashlib.md5()
else:
hasher = None
rbytes = 0
eof = False
while not eof:
if nbytes is not None:
bufsize = min(nbytes-rbytes, self._bufsize)
else:
bufsize = self._bufsize
buf = input.read(bufsize)
f.write(buf)
bufsize = len(buf)
rbytes += bufsize
if hasher:
hasher.update(buf)
if nbytes is not None:
if rbytes >= nbytes:
eof = True
elif bufsize == 0:
f.close()
raise BadRequest('Only received %s of %s expected bytes.' % (rbytes, nbytes))
elif bufsize == 0:
eof = True
if hasher:
received_md5 = hasher.digest()
if metadata['content-md5'] != received_md5:
raise BadRequest(
'Received content MD5 %r does not match expected %r.'
% (received_md5, metadata['content-md5'])
)
return "test"
def get_content(self, name, version, metadata={}):
return self.get_content_range(name, version, metadata)
def get_content_range(self, name, version, metadata={}, get_slice=None):
"""Return (nbytes, metadata, data_iterator) tuple for existing file-version object."""
dirname, relname = self._dirname_relname(name, version)
fullname = "%s/%s" % (dirname, relname)
nbytes = os.path.getsize(fullname)
if get_slice is not None:
pos = coalesce(get_slice.start, 0)
limit = coalesce(get_slice.stop, nbytes)
else:
pos = 0
limit = nbytes
if pos != 0 or limit != nbytes:
# most object metadata does not apply to partial read content
metadata = {
k: v
for k, v in metadata.items()
if k in {'content-type'}
}
length = limit - pos
def helper():
if 'content-md5' in metadata:
hasher = hashlib.md5()
else:
hasher = None
rpos = pos
eof = False
with open(fullname, 'rb') as f:
f.seek(rpos)
while not eof:
buf = f.read(min(limit-rpos, self._bufsize))
buflen = len(buf)
rpos += buflen
if hasher:
hasher.update(buf)
                if rpos >= limit:
eof = True
elif buflen == 0:
raise IOError('Read truncated at %s when %s expected.' % (rpos, limit))
if eof and hasher:
retrieved_md5 = hasher.digest()
if metadata['content-md5'] != retrieved_md5:
raise IOError(
'Retrieved content MD5 %s does not match expected %s.'
% (binascii.hexlify(retrieved_md5), binascii.hexlify(metadata['content-md5']))
)
yield buf
return (length, metadata, helper())
def delete(self, name, version):
"""Delete object version."""
dirname, relname = self._dirname_relname(name, version)
fullname = "%s/%s" % (dirname, relname)
os.remove(fullname)
def delete_namespace(self, name):
"""Tidy up after an empty namespace that has been deleted."""
dirname, relname = self._dirname_relname(name, 'dummy')
try:
os.removedirs(dirname)
except OSError:
pass
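The name/version to filesystem-path translation performed by `_dirname_relname` above can be illustrated standalone. This sketch (the function name and example rootdir are illustrative, assuming the default `/var/www/hatrac` storage path) mirrors that mapping:

```python
def dirname_relname(root, name, version):
    """Map a Hatrac object name and version to (dirname, relname) on disk."""
    assert name and version and ':' not in version
    # split the slash-delimited object name; all but the last part are directories
    parts = [p for p in name.split('/') if p]
    dirparts, relpart = parts[:-1], parts[-1]
    dirname = "%s/%s" % (root, "/".join(dirparts)) if dirparts else root
    # the leaf filename is "objectname:version", per the module docstring
    return dirname, "%s:%s" % (relpart, version)
```

So `/ns1/obj1` at version `V1` lands in `rootdir/ns1` as the file `obj1:V1`, which is why versions may not contain `:`.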
|
Attention: Riendeau Hyundai is currently looking to purchase additional used 2008 Chevrolet Aveo-5 vehicles. Come back soon or send us a search request and we will contact you as soon as we receive more 2008 Chevrolet Aveo-5 vehicles. You can also contact us directly by phone at 1-844-881-6377 to provide further details about the exact vehicle you're looking for. This is a free service, and Riendeau Hyundai has access to thousands of other vehicles near Ste-Julie (Montreal South Shore) and all over the province of Quebec.
|
from yowsup.structs import ProtocolTreeNode
from .notification_contact import ContactNotificationProtocolEntity
class ContactsSyncNotificationProtocolEntity(ContactNotificationProtocolEntity):
'''
<notification from="4917667738517@s.whatsapp.net" t="1437251557" offline="0" type="contacts" id="4174521704">
<sync after="1437251557"></sync>
</notification>
'''
def __init__(self, _id, _from, timestamp, notify, offline, after):
super(ContactsSyncNotificationProtocolEntity, self).__init__(_id, _from, timestamp, notify, offline)
self.setData(after)
def setData(self, after):
self.after = int(after)
def toProtocolTreeNode(self):
node = super(ContactsSyncNotificationProtocolEntity, self).toProtocolTreeNode()
syncNode = ProtocolTreeNode("sync", {"after": str(self.after)}, None, None)
node.addChild(syncNode)
return node
@staticmethod
def fromProtocolTreeNode(node):
entity = ContactNotificationProtocolEntity.fromProtocolTreeNode(node)
entity.__class__ = ContactsSyncNotificationProtocolEntity
syncNode = node.getChild("sync")
entity.setData(syncNode.getAttributeValue("after"))
return entity
|
Eurostat, the EU's statistical agency, said the jobless rate in the 19-country eurozone had fallen to 10.4% from 10.5% in November.
The eurozone's total jobless figure is better than analysts had expected, given fears surrounding the slowdown in China and volatility in financial markets.
Jennifer McKeown, an analyst at Capital Economics, welcomed the improvement, but said a slowing global economy could reverse any progress.
"We still think that the European Central Bank has a lot more work to do," she added.
Howard Archer, economist at IHS Global Insight, said he expected continued modest growth, with unemployment falling to below 10% by the end of the year if current trends continued.
ECB President Mario Draghi has already indicated another stimulus package could be unveiled as soon as next month.
|
import sys
import rethinkdb as r
from disco.core import Job
import csv
class GroupSum(Job):
def __init__(self, group_by, fields, *args, **kwargs):
self.group_by = int(group_by)
        self.fields = list(map(int, fields))  # a list, so membership tests work on every call
super(GroupSum, self).__init__(*args, **kwargs)
@staticmethod
def map_reader(fd, size, url, params):
reader = csv.reader(fd, delimiter=',')
for row in reader:
if len(row) <= 1:
continue
yield row
    def map(self, line, params):
        words = line
        total = 0
        result = []
        for idx in range(len(words)):
            if idx == self.group_by:
                continue
            try:
                if idx in self.fields:
                    total += int(words[idx])
                else:
                    result.append(int(words[idx]))
            except ValueError:
                # skip cells that are not parseable as integers
                pass
        result.insert(0, total)
        yield words[self.group_by], result
def reduce(self, rows_iter, out, params):
from disco.util import kvgroup
final = {}
for key, result in kvgroup(rows_iter):
if key not in final:
final[key] = []
for line in result:
for value in range(len(line)):
if len(final[key]) <= value:
final[key].append(line[value])
else:
final[key][value] += line[value]
out.add(final, "a")
if __name__ == '__main__':
from add import GroupSum
db = r.connect(**{
'host': 'batman.krunchr.net',
'port': 28019,
'auth_key': '',
'db': 'krunchr'
})
dataset = r.db("krunchr").table('datasets').get(sys.argv[1]).run(db)
fields = [str(dataset['fields'].index(field)) for field in sys.argv[2:]]
group_by = dataset['fields'].index(sys.argv[2])
job = GroupSum(group_by, fields)
job.run(input=['data:%s' % sys.argv[1]])
from disco.core import result_iterator
table_name = sys.argv[1].replace('-', '_')
try:
r.db("krunchr").table_create(table_name).run(db)
    except Exception:
        # table may already exist; ignore the error
        pass
lines = []
fields = dataset['fields']
fields.remove(sys.argv[2])
for line in result_iterator(job.wait(show=True)):
for key in line[0]:
insert = {sys.argv[2]: key}
if len(line[0][key]) < len(fields):
continue
insert.update({field: line[0][key][fields.index(field)-1] for field in fields})
r.table(table_name).insert(insert).run(db)
r.table('datasets').filter({'id': sys.argv[1]}).update({'ready': True}).run(db)
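The reduce step above accumulates value lists element-wise per group key. A self-contained sketch of that accumulation logic, without the Disco plumbing (the `group_sum` name is illustrative, not part of the Disco API):

```python
def group_sum(pairs):
    """Sum value lists element-wise, grouped by key."""
    final = {}
    for key, row in pairs:
        acc = final.setdefault(key, [])
        for i, value in enumerate(row):
            if len(acc) <= i:
                # first row seen for this position: start the accumulator
                acc.append(value)
            else:
                acc[i] += value
    return final
```

This is the same shape the job's `reduce` builds before handing the dict to `out.add`.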
|
Dedrick Asante-Muhammad and Kylie Patterson of the Racial Wealth Divide Initiative at the Corporation for Enterprise Development write about their partnerships with organizations such as Urban Alliance to expand economic opportunity for residents of low-income communities. Read the full op-ed below and in The Baltimore Sun.
The racial economic divide in Baltimore isn’t pretty.
Average white household income in Baltimore is nearly two times that of black households. The unemployment rate for workers of color is three times the rate for white workers. And just 13 percent of black adults in Baltimore finish a bachelor’s degree or higher compared to 51 percent of white adults, according to a recent report from the Racial Wealth Divide Initiative at the Corporation for Enterprise Development (CFED).
For starters, Baltimore and its people of color are performing better than other big city communities of color. Baltimore’s African American median income of $33,801 beats Chicago ($30,303), New Orleans ($25,806) and Miami ($21,212), each of which was also recently studied by CFED. Even in comparison to whites, median-income blacks in Baltimore are doing better, making 54 percent of white household income compared to anywhere between 30 percent and 43 percent in the three other cities.
Considering these macro- and micro-economic factors, Baltimore is well positioned to foster economic growth over the next 20 years. Most often, economic growth in a city means the displacement of economically disenfranchised communities and new developments that manage to maintain historic racial segregation. To avoid this national pattern of “development,” Baltimore must implement a “racial equity audit” of all future development and investments. That means taking deliberate policy intervention to limit gentrification and ongoing housing segregation while ensuring affordable home ownership and investment in low-wealth communities — and not just resources for new high-wealth residents.
Government leaders can begin this process by taking a hard look at current policies and applying the Associated Black Charities’ “Ten Essential Questions for Policy Development,” which place racial equity at the forefront of effective policy. Among the key questions to consider: Will the policy increase access and opportunity for communities of color? Will the policy protect against racial violence, racial profiling and discrimination?
As part of this process, civic and elected leaders should engage nonprofits working in Baltimore communities, especially those led by and serving people of color. CFED’s Racial Wealth Divide Initiative is currently working with six such organizations — Bon Secours Community Works, Center for Urban Families, Druid Heights CDC, Latino Economic Development Center, Muse 360 Arts and Urban Alliance/Baltimore — to support and strengthen their efforts to expand economic opportunity in low-income communities.
Baltimore, long a symbol of racial inequality and economic disenfranchisement, has the opportunity to become a national leader in addressing racial economic inequality. The question is: will the city’s leadership, along with the business, civic and philanthropic communities, take on the challenge and do the hard work to make Baltimore a 21st century model of development — development that bridges the divides of the past and creates a new and inspiring future for the city, its residents and the entire country?
|
# Copyright 2014 OpenStack Foundation
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
import collections
from oslo_config import cfg
from octavia.common import constants
from octavia.tests.common import sample_certs
CONF = cfg.CONF
def sample_amphora_tuple(id='sample_amphora_id_1', lb_network_ip='10.0.1.1',
vrrp_ip='10.1.1.1', ha_ip='192.168.10.1',
vrrp_port_id='1234', ha_port_id='1234', role=None,
status='ACTIVE', vrrp_interface=None,
vrrp_priority=None, api_version='0.5'):
in_amphora = collections.namedtuple(
'amphora', 'id, lb_network_ip, vrrp_ip, ha_ip, vrrp_port_id, '
'ha_port_id, role, status, vrrp_interface,'
'vrrp_priority, api_version')
return in_amphora(
id=id,
lb_network_ip=lb_network_ip,
vrrp_ip=vrrp_ip,
ha_ip=ha_ip,
vrrp_port_id=vrrp_port_id,
ha_port_id=ha_port_id,
role=role,
status=status,
vrrp_interface=vrrp_interface,
vrrp_priority=vrrp_priority,
api_version=api_version)
RET_PERSISTENCE = {
'type': 'HTTP_COOKIE',
'cookie_name': None}
RET_MONITOR_1 = {
'id': 'sample_monitor_id_1',
'type': 'HTTP',
'delay': 30,
'timeout': 31,
'fall_threshold': 3,
'rise_threshold': 2,
'http_method': 'GET',
'url_path': '/index.html',
'expected_codes': '418',
'enabled': True,
'http_version': 1.0,
'domain_name': None}
RET_MONITOR_2 = {
'id': 'sample_monitor_id_2',
'type': 'HTTP',
'delay': 30,
'timeout': 31,
'fall_threshold': 3,
'rise_threshold': 2,
'http_method': 'GET',
'url_path': '/healthmon.html',
'expected_codes': '418',
'enabled': True,
'http_version': 1.0,
'domain_name': None}
RET_MEMBER_1 = {
'id': 'sample_member_id_1',
'address': '10.0.0.99',
'protocol_port': 82,
'weight': 13,
'subnet_id': '10.0.0.1/24',
'enabled': True,
'operating_status': 'ACTIVE',
'monitor_address': None,
'monitor_port': None,
'backup': False}
RET_MEMBER_2 = {
'id': 'sample_member_id_2',
'address': '10.0.0.98',
'protocol_port': 82,
'weight': 13,
'subnet_id': '10.0.0.1/24',
'enabled': True,
'operating_status': 'ACTIVE',
'monitor_address': None,
'monitor_port': None,
'backup': False}
RET_MEMBER_3 = {
'id': 'sample_member_id_3',
'address': '10.0.0.97',
'protocol_port': 82,
'weight': 13,
'subnet_id': '10.0.0.1/24',
'enabled': True,
'operating_status': 'ACTIVE',
'monitor_address': None,
'monitor_port': None,
'backup': False}
RET_POOL_1 = {
'id': 'sample_pool_id_1',
'protocol': 'http',
'lb_algorithm': 'roundrobin',
'members': [RET_MEMBER_1, RET_MEMBER_2],
'health_monitor': RET_MONITOR_1,
'session_persistence': RET_PERSISTENCE,
'enabled': True,
'operating_status': 'ACTIVE',
'stick_size': '10k',
constants.HTTP_REUSE: False,
'ca_tls_path': '',
'crl_path': '',
'tls_enabled': False}
RET_POOL_2 = {
'id': 'sample_pool_id_2',
'protocol': 'http',
'lb_algorithm': 'roundrobin',
'members': [RET_MEMBER_3],
'health_monitor': RET_MONITOR_2,
'session_persistence': RET_PERSISTENCE,
'enabled': True,
'operating_status': 'ACTIVE',
'stick_size': '10k',
constants.HTTP_REUSE: False,
'ca_tls_path': '',
'crl_path': '',
'tls_enabled': False}
RET_DEF_TLS_CONT = {'id': 'cont_id_1', 'allencompassingpem': 'imapem',
'primary_cn': 'FakeCn'}
RET_SNI_CONT_1 = {'id': 'cont_id_2', 'allencompassingpem': 'imapem2',
'primary_cn': 'FakeCn'}
RET_SNI_CONT_2 = {'id': 'cont_id_3', 'allencompassingpem': 'imapem3',
'primary_cn': 'FakeCn2'}
RET_L7RULE_1 = {
'id': 'sample_l7rule_id_1',
'type': constants.L7RULE_TYPE_PATH,
'compare_type': constants.L7RULE_COMPARE_TYPE_STARTS_WITH,
'key': None,
'value': '/api',
'invert': False,
'enabled': True}
RET_L7RULE_2 = {
'id': 'sample_l7rule_id_2',
'type': constants.L7RULE_TYPE_HEADER,
'compare_type': constants.L7RULE_COMPARE_TYPE_CONTAINS,
'key': 'Some-header',
'value': 'This\\ string\\\\\\ with\\ stuff',
'invert': True,
'enabled': True}
RET_L7RULE_3 = {
'id': 'sample_l7rule_id_3',
'type': constants.L7RULE_TYPE_COOKIE,
'compare_type': constants.L7RULE_COMPARE_TYPE_REGEX,
'key': 'some-cookie',
'value': 'this.*|that',
'invert': False,
'enabled': True}
RET_L7RULE_4 = {
'id': 'sample_l7rule_id_4',
'type': constants.L7RULE_TYPE_FILE_TYPE,
'compare_type': constants.L7RULE_COMPARE_TYPE_EQUAL_TO,
'key': None,
'value': 'jpg',
'invert': False,
'enabled': True}
RET_L7RULE_5 = {
'id': 'sample_l7rule_id_5',
'type': constants.L7RULE_TYPE_HOST_NAME,
'compare_type': constants.L7RULE_COMPARE_TYPE_ENDS_WITH,
'key': None,
'value': '.example.com',
'invert': False,
'enabled': True}
RET_L7RULE_6 = {
'id': 'sample_l7rule_id_6',
'type': constants.L7RULE_TYPE_HOST_NAME,
'compare_type': constants.L7RULE_COMPARE_TYPE_ENDS_WITH,
'key': None,
'value': '.example.com',
'invert': False,
'enabled': False}
RET_L7POLICY_1 = {
'id': 'sample_l7policy_id_1',
'action': constants.L7POLICY_ACTION_REDIRECT_TO_POOL,
'redirect_pool': RET_POOL_2,
'redirect_url': None,
'redirect_prefix': None,
'enabled': True,
'l7rules': [RET_L7RULE_1],
'redirect_http_code': None}
RET_L7POLICY_2 = {
'id': 'sample_l7policy_id_2',
'action': constants.L7POLICY_ACTION_REDIRECT_TO_URL,
'redirect_pool': None,
'redirect_url': 'http://www.example.com',
'redirect_prefix': None,
'enabled': True,
'l7rules': [RET_L7RULE_2, RET_L7RULE_3],
'redirect_http_code': 302}
RET_L7POLICY_3 = {
'id': 'sample_l7policy_id_3',
'action': constants.L7POLICY_ACTION_REJECT,
'redirect_pool': None,
'redirect_url': None,
'redirect_prefix': None,
'enabled': True,
'l7rules': [RET_L7RULE_4, RET_L7RULE_5],
'redirect_http_code': None}
RET_L7POLICY_4 = {
'id': 'sample_l7policy_id_4',
'action': constants.L7POLICY_ACTION_REJECT,
'redirect_pool': None,
'redirect_url': None,
'redirect_prefix': None,
'enabled': True,
'l7rules': [],
'redirect_http_code': None}
RET_L7POLICY_5 = {
'id': 'sample_l7policy_id_5',
'action': constants.L7POLICY_ACTION_REJECT,
'redirect_pool': None,
'redirect_url': None,
'redirect_prefix': None,
'enabled': False,
'l7rules': [RET_L7RULE_5],
'redirect_http_code': None}
RET_L7POLICY_6 = {
'id': 'sample_l7policy_id_6',
'action': constants.L7POLICY_ACTION_REJECT,
'redirect_pool': None,
'redirect_url': None,
'redirect_prefix': None,
'enabled': True,
'l7rules': [],
'redirect_http_code': None}
RET_L7POLICY_7 = {
'id': 'sample_l7policy_id_7',
'action': constants.L7POLICY_ACTION_REDIRECT_PREFIX,
'redirect_pool': None,
'redirect_url': None,
'redirect_prefix': 'https://example.com',
'enabled': True,
'l7rules': [RET_L7RULE_2, RET_L7RULE_3],
'redirect_http_code': 302}
RET_L7POLICY_8 = {
'id': 'sample_l7policy_id_8',
'action': constants.L7POLICY_ACTION_REDIRECT_TO_URL,
'redirect_pool': None,
'redirect_url': 'http://www.example.com',
'redirect_prefix': None,
'enabled': True,
'l7rules': [RET_L7RULE_2, RET_L7RULE_3],
'redirect_http_code': None}
RET_LISTENER = {
'id': 'sample_listener_id_1',
'protocol_port': '80',
'protocol': 'HTTP',
'protocol_mode': 'http',
'default_pool': RET_POOL_1,
'connection_limit': constants.HAPROXY_MAX_MAXCONN,
'amphorae': [sample_amphora_tuple()],
'peer_port': 1024,
'topology': 'SINGLE',
'user_log_format': '12345\\ sample_loadbalancer_id_1\\ %f\\ %ci\\ %cp\\ '
'%t\\ %{+Q}r\\ %ST\\ %B\\ %U\\ %[ssl_c_verify]\\ '
'%{+Q}[ssl_c_s_dn]\\ %b\\ %s\\ %Tt\\ %tsc',
'pools': [RET_POOL_1],
'l7policies': [],
'enabled': True,
'insert_headers': {},
'timeout_client_data': 50000,
'timeout_member_connect': 5000,
'timeout_member_data': 50000,
'timeout_tcp_inspect': 0,
}
RET_LISTENER_L7 = {
'id': 'sample_listener_id_1',
'protocol_port': '80',
'protocol': 'HTTP',
'protocol_mode': 'http',
'default_pool': RET_POOL_1,
'connection_limit': constants.HAPROXY_MAX_MAXCONN,
'amphorae': [sample_amphora_tuple()],
'peer_port': 1024,
'topology': 'SINGLE',
'user_log_format': '12345\\ sample_loadbalancer_id_1\\ %f\\ %ci\\ %cp\\ '
'%t\\ %{+Q}r\\ %ST\\ %B\\ %U\\ %[ssl_c_verify]\\ '
'%{+Q}[ssl_c_s_dn]\\ %b\\ %s\\ %Tt\\ %tsc',
'pools': [RET_POOL_1, RET_POOL_2],
'l7policies': [RET_L7POLICY_1, RET_L7POLICY_2, RET_L7POLICY_3,
RET_L7POLICY_4, RET_L7POLICY_5, RET_L7POLICY_6,
RET_L7POLICY_7],
'enabled': True,
'insert_headers': {},
'timeout_client_data': 50000,
'timeout_member_connect': 5000,
'timeout_member_data': 50000,
'timeout_tcp_inspect': 0,
}
RET_LISTENER_TLS = {
'id': 'sample_listener_id_1',
'protocol_port': '443',
'protocol': 'TERMINATED_HTTPS',
'protocol_mode': 'http',
'default_pool': RET_POOL_1,
'connection_limit': constants.HAPROXY_MAX_MAXCONN,
'tls_certificate_id': 'cont_id_1',
'default_tls_path': '/etc/ssl/sample_loadbalancer_id_1/fakeCN.pem',
'default_tls_container': RET_DEF_TLS_CONT,
'pools': [RET_POOL_1],
'l7policies': [],
'enabled': True,
'insert_headers': {}}
RET_LISTENER_TLS_SNI = {
'id': 'sample_listener_id_1',
'protocol_port': '443',
'protocol': 'TERMINATED_HTTPS',
'default_pool': RET_POOL_1,
'connection_limit': constants.HAPROXY_MAX_MAXCONN,
'tls_certificate_id': 'cont_id_1',
'default_tls_path': '/etc/ssl/sample_loadbalancer_id_1/fakeCN.pem',
'default_tls_container': RET_DEF_TLS_CONT,
'crt_dir': '/v2/sample_loadbalancer_id_1',
'sni_container_ids': ['cont_id_2', 'cont_id_3'],
'sni_containers': [RET_SNI_CONT_1, RET_SNI_CONT_2],
'pools': [RET_POOL_1],
'l7policies': [],
'enabled': True,
'insert_headers': {}}
RET_AMPHORA = {
'id': 'sample_amphora_id_1',
'lb_network_ip': '10.0.1.1',
'vrrp_ip': '10.1.1.1',
'ha_ip': '192.168.10.1',
'vrrp_port_id': '1234',
'ha_port_id': '1234',
'role': None,
'status': 'ACTIVE',
'vrrp_interface': None,
'vrrp_priority': None}
RET_LB = {
'host_amphora': RET_AMPHORA,
'id': 'sample_loadbalancer_id_1',
'vip_address': '10.0.0.2',
'listener': RET_LISTENER,
'topology': 'SINGLE',
'enabled': True,
'global_connection_limit': constants.HAPROXY_MAX_MAXCONN}
RET_LB_L7 = {
'host_amphora': RET_AMPHORA,
'id': 'sample_loadbalancer_id_1',
'vip_address': '10.0.0.2',
'listener': RET_LISTENER_L7,
'topology': 'SINGLE',
'enabled': True,
'global_connection_limit': constants.HAPROXY_MAX_MAXCONN}
UDP_SOURCE_IP_BODY = {
'type': constants.SESSION_PERSISTENCE_SOURCE_IP,
'persistence_timeout': 33,
'persistence_granularity': '255.0.0.0'
}
RET_UDP_HEALTH_MONITOR = {
'id': 'sample_monitor_id_1',
'type': constants.HEALTH_MONITOR_UDP_CONNECT,
'delay': 30,
'timeout': 31,
'enabled': True,
'fall_threshold': 3,
'check_script_path': (CONF.haproxy_amphora.base_path +
'/lvs/check/udp_check.sh')
}
UDP_HEALTH_MONITOR_NO_SCRIPT = {
'id': 'sample_monitor_id_1',
'check_script_path': None,
'delay': 30,
'enabled': True,
'fall_threshold': 3,
'timeout': 31,
'type': 'UDP'
}
RET_UDP_MEMBER = {
'id': 'member_id_1',
'address': '192.0.2.10',
'protocol_port': 82,
'weight': 13,
'enabled': True,
'monitor_address': None,
'monitor_port': None
}
RET_UDP_MEMBER_MONITOR_IP_PORT = {
'id': 'member_id_1',
'address': '192.0.2.10',
'protocol_port': 82,
'weight': 13,
'enabled': True,
'monitor_address': '192.168.1.1',
'monitor_port': 9000
}
UDP_MEMBER_1 = {
'id': 'sample_member_id_1',
'address': '10.0.0.99',
'enabled': True,
'protocol_port': 82,
'weight': 13,
'monitor_address': None,
'monitor_port': None
}
UDP_MEMBER_2 = {
'id': 'sample_member_id_2',
'address': '10.0.0.98',
'enabled': True,
'protocol_port': 82,
'weight': 13,
'monitor_address': None,
'monitor_port': None
}
RET_UDP_POOL = {
'id': 'sample_pool_id_1',
'enabled': True,
'health_monitor': UDP_HEALTH_MONITOR_NO_SCRIPT,
'lb_algorithm': 'rr',
'members': [UDP_MEMBER_1, UDP_MEMBER_2],
'protocol': 'udp',
'session_persistence': UDP_SOURCE_IP_BODY
}
RET_UDP_LISTENER = {
'connection_limit': 98,
'default_pool': {
'id': 'sample_pool_id_1',
'enabled': True,
'health_monitor': RET_UDP_HEALTH_MONITOR,
'lb_algorithm': 'rr',
'members': [UDP_MEMBER_1, UDP_MEMBER_2],
'protocol': 'udp',
'session_persistence': UDP_SOURCE_IP_BODY
},
'enabled': True,
'id': 'sample_listener_id_1',
'protocol_mode': 'udp',
'protocol_port': '80'
}
def sample_loadbalancer_tuple(proto=None, monitor=True, persistence=True,
persistence_type=None, tls=False, sni=False,
topology=None, l7=False, enabled=True):
proto = 'HTTP' if proto is None else proto
topology = 'SINGLE' if topology is None else topology
in_lb = collections.namedtuple(
'load_balancer', 'id, name, protocol, vip, listeners, amphorae,'
' enabled')
return in_lb(
id='sample_loadbalancer_id_1',
name='test-lb',
protocol=proto,
vip=sample_vip_tuple(),
topology=topology,
listeners=[sample_listener_tuple(proto=proto, monitor=monitor,
persistence=persistence,
persistence_type=persistence_type,
tls=tls,
sni=sni,
l7=l7,
enabled=enabled)],
enabled=enabled
)
def sample_listener_loadbalancer_tuple(proto=None, topology=None,
enabled=True):
proto = 'HTTP' if proto is None else proto
if topology and topology in ['ACTIVE_STANDBY', 'ACTIVE_ACTIVE']:
more_amp = True
else:
more_amp = False
topology = constants.TOPOLOGY_SINGLE
in_lb = collections.namedtuple(
'load_balancer', 'id, name, protocol, vip, amphorae, topology, '
'listeners, enabled, project_id')
return in_lb(
id='sample_loadbalancer_id_1',
name='test-lb',
protocol=proto,
vip=sample_vip_tuple(),
amphorae=[sample_amphora_tuple(role=constants.ROLE_MASTER),
sample_amphora_tuple(
id='sample_amphora_id_2',
lb_network_ip='10.0.1.2',
vrrp_ip='10.1.1.2',
role=constants.ROLE_BACKUP)]
if more_amp else [sample_amphora_tuple()],
topology=topology,
listeners=[],
enabled=enabled,
project_id='12345'
)
def sample_lb_with_udp_listener_tuple(
proto=None, topology=None, enabled=True, pools=None):
proto = 'HTTP' if proto is None else proto
if topology and topology in ['ACTIVE_STANDBY', 'ACTIVE_ACTIVE']:
more_amp = True
else:
more_amp = False
topology = constants.TOPOLOGY_SINGLE
listeners = [sample_listener_tuple(
proto=constants.PROTOCOL_UDP,
persistence_type=constants.SESSION_PERSISTENCE_SOURCE_IP,
persistence_timeout=33,
persistence_granularity='255.255.0.0',
monitor_proto=constants.HEALTH_MONITOR_UDP_CONNECT)]
in_lb = collections.namedtuple(
'load_balancer', 'id, name, protocol, vip, amphorae, topology, '
'pools, enabled, project_id, listeners')
return in_lb(
id='sample_loadbalancer_id_1',
name='test-lb',
protocol=proto,
vip=sample_vip_tuple(),
amphorae=[sample_amphora_tuple(role=constants.ROLE_MASTER),
sample_amphora_tuple(
id='sample_amphora_id_2',
lb_network_ip='10.0.1.2',
vrrp_ip='10.1.1.2',
role=constants.ROLE_BACKUP)]
if more_amp else [sample_amphora_tuple()],
topology=topology,
listeners=listeners,
pools=pools or [],
enabled=enabled,
project_id='12345'
)
def sample_vrrp_group_tuple():
in_vrrp_group = collections.namedtuple(
'vrrp_group', 'load_balancer_id, vrrp_auth_type, vrrp_auth_pass, '
'advert_int, smtp_server, smtp_connect_timeout, '
'vrrp_group_name')
return in_vrrp_group(
vrrp_group_name='sample_loadbalancer_id_1',
load_balancer_id='sample_loadbalancer_id_1',
vrrp_auth_type='PASS',
vrrp_auth_pass='123',
advert_int='1',
smtp_server='',
smtp_connect_timeout='')
def sample_vip_tuple():
vip = collections.namedtuple('vip', 'ip_address')
return vip(ip_address='10.0.0.2')
def sample_listener_tuple(proto=None, monitor=True, alloc_default_pool=True,
persistence=True, persistence_type=None,
persistence_cookie=None, persistence_timeout=None,
persistence_granularity=None,
tls=False, sni=False, peer_port=None, topology=None,
l7=False, enabled=True, insert_headers=None,
be_proto=None, monitor_ip_port=False,
monitor_proto=None, monitor_expected_codes=None,
backup_member=False, disabled_member=False,
connection_limit=-1,
timeout_client_data=50000,
timeout_member_connect=5000,
timeout_member_data=50000,
timeout_tcp_inspect=0,
client_ca_cert=False, client_crl_cert=False,
ssl_type_l7=False, pool_cert=False,
pool_ca_cert=False, pool_crl=False,
tls_enabled=False, hm_host_http_check=False,
id='sample_listener_id_1', recursive_nest=False,
provisioning_status=constants.ACTIVE):
proto = 'HTTP' if proto is None else proto
if be_proto is None:
be_proto = 'HTTP' if proto == 'TERMINATED_HTTPS' else proto
topology = 'SINGLE' if topology is None else topology
port = '443' if proto in ['HTTPS', 'TERMINATED_HTTPS'] else '80'
peer_port = 1024 if peer_port is None else peer_port
insert_headers = insert_headers or {}
in_listener = collections.namedtuple(
'listener', 'id, project_id, protocol_port, protocol, default_pool, '
'connection_limit, tls_certificate_id, '
'sni_container_ids, default_tls_container, '
'sni_containers, load_balancer, peer_port, pools, '
'l7policies, enabled, insert_headers, timeout_client_data,'
'timeout_member_connect, timeout_member_data, '
'timeout_tcp_inspect, client_ca_tls_certificate_id, '
'client_ca_tls_certificate, client_authentication, '
'client_crl_container_id, provisioning_status')
if l7:
pools = [
sample_pool_tuple(
proto=be_proto, monitor=monitor, persistence=persistence,
persistence_type=persistence_type,
persistence_cookie=persistence_cookie,
monitor_ip_port=monitor_ip_port, monitor_proto=monitor_proto,
pool_cert=pool_cert, pool_ca_cert=pool_ca_cert,
pool_crl=pool_crl, tls_enabled=tls_enabled,
hm_host_http_check=hm_host_http_check),
sample_pool_tuple(
proto=be_proto, monitor=monitor, persistence=persistence,
persistence_type=persistence_type,
persistence_cookie=persistence_cookie, sample_pool=2,
monitor_ip_port=monitor_ip_port, monitor_proto=monitor_proto,
pool_cert=pool_cert, pool_ca_cert=pool_ca_cert,
pool_crl=pool_crl, tls_enabled=tls_enabled,
hm_host_http_check=hm_host_http_check)]
l7policies = [
sample_l7policy_tuple('sample_l7policy_id_1', sample_policy=1),
sample_l7policy_tuple('sample_l7policy_id_2', sample_policy=2),
sample_l7policy_tuple('sample_l7policy_id_3', sample_policy=3),
sample_l7policy_tuple('sample_l7policy_id_4', sample_policy=4),
sample_l7policy_tuple('sample_l7policy_id_5', sample_policy=5),
sample_l7policy_tuple('sample_l7policy_id_6', sample_policy=6),
sample_l7policy_tuple('sample_l7policy_id_7', sample_policy=7)]
if ssl_type_l7:
l7policies.append(sample_l7policy_tuple(
'sample_l7policy_id_8', sample_policy=8))
else:
pools = [
sample_pool_tuple(
proto=be_proto, monitor=monitor, persistence=persistence,
persistence_type=persistence_type,
persistence_cookie=persistence_cookie,
monitor_ip_port=monitor_ip_port, monitor_proto=monitor_proto,
backup_member=backup_member, disabled_member=disabled_member,
pool_cert=pool_cert, pool_ca_cert=pool_ca_cert,
pool_crl=pool_crl, tls_enabled=tls_enabled,
hm_host_http_check=hm_host_http_check)]
l7policies = []
listener = in_listener(
id=id,
project_id='12345',
protocol_port=port,
protocol=proto,
load_balancer=sample_listener_loadbalancer_tuple(proto=proto,
topology=topology),
peer_port=peer_port,
default_pool=sample_pool_tuple(
proto=be_proto, monitor=monitor, persistence=persistence,
persistence_type=persistence_type,
persistence_cookie=persistence_cookie,
persistence_timeout=persistence_timeout,
persistence_granularity=persistence_granularity,
monitor_ip_port=monitor_ip_port,
monitor_proto=monitor_proto,
monitor_expected_codes=monitor_expected_codes,
pool_cert=pool_cert,
pool_ca_cert=pool_ca_cert,
pool_crl=pool_crl,
tls_enabled=tls_enabled,
hm_host_http_check=hm_host_http_check
) if alloc_default_pool else '',
connection_limit=connection_limit,
tls_certificate_id='cont_id_1' if tls else '',
sni_container_ids=['cont_id_2', 'cont_id_3'] if sni else [],
default_tls_container=sample_tls_container_tuple(
id='cont_id_1', certificate=sample_certs.X509_CERT,
private_key=sample_certs.X509_CERT_KEY,
intermediates=sample_certs.X509_IMDS_LIST,
primary_cn=sample_certs.X509_CERT_CN
) if tls else '',
sni_containers=[
sample_tls_sni_container_tuple(
tls_container_id='cont_id_2',
tls_container=sample_tls_container_tuple(
id='cont_id_2', certificate=sample_certs.X509_CERT_2,
private_key=sample_certs.X509_CERT_KEY_2,
intermediates=sample_certs.X509_IMDS_LIST,
primary_cn=sample_certs.X509_CERT_CN_2)),
sample_tls_sni_container_tuple(
tls_container_id='cont_id_3',
tls_container=sample_tls_container_tuple(
id='cont_id_3', certificate=sample_certs.X509_CERT_3,
private_key=sample_certs.X509_CERT_KEY_3,
intermediates=sample_certs.X509_IMDS_LIST,
primary_cn=sample_certs.X509_CERT_CN_3))]
if sni else [],
pools=pools,
l7policies=l7policies,
enabled=enabled,
insert_headers=insert_headers,
timeout_client_data=timeout_client_data,
timeout_member_connect=timeout_member_connect,
timeout_member_data=timeout_member_data,
timeout_tcp_inspect=timeout_tcp_inspect,
client_ca_tls_certificate_id='cont_id_ca' if client_ca_cert else '',
client_ca_tls_certificate=sample_tls_container_tuple(
id='cont_id_ca', certificate=sample_certs.X509_CA_CERT,
primary_cn=sample_certs.X509_CA_CERT_CN
) if client_ca_cert else '',
client_authentication=(
constants.CLIENT_AUTH_MANDATORY if client_ca_cert else
constants.CLIENT_AUTH_NONE),
client_crl_container_id='cont_id_crl' if client_crl_cert else '',
provisioning_status=provisioning_status,
)
if recursive_nest:
listener.load_balancer.listeners.append(listener)
return listener
def sample_tls_sni_container_tuple(tls_container_id=None, tls_container=None):
sc = collections.namedtuple('sni_container', 'tls_container_id, '
'tls_container')
return sc(tls_container_id=tls_container_id, tls_container=tls_container)
def sample_tls_sni_containers_tuple(tls_container_id=None, tls_container=None):
sc = collections.namedtuple('sni_containers', 'tls_container_id, '
'tls_container')
return [sc(tls_container_id=tls_container_id, tls_container=tls_container)]
def sample_tls_container_tuple(id='cont_id_1', certificate=None,
private_key=None, intermediates=None,
primary_cn=None):
sc = collections.namedtuple(
'tls_container',
'id, certificate, private_key, intermediates, primary_cn')
return sc(id=id, certificate=certificate, private_key=private_key,
intermediates=intermediates or [], primary_cn=primary_cn)
def sample_pool_tuple(proto=None, monitor=True, persistence=True,
persistence_type=None, persistence_cookie=None,
persistence_timeout=None, persistence_granularity=None,
sample_pool=1, monitor_ip_port=False,
monitor_proto=None, monitor_expected_codes=None,
backup_member=False,
disabled_member=False, has_http_reuse=True,
pool_cert=False, pool_ca_cert=False, pool_crl=False,
tls_enabled=False, hm_host_http_check=False,
provisioning_status=constants.ACTIVE):
proto = 'HTTP' if proto is None else proto
monitor_proto = proto if monitor_proto is None else monitor_proto
in_pool = collections.namedtuple(
'pool', 'id, protocol, lb_algorithm, members, health_monitor, '
'session_persistence, enabled, operating_status, '
'tls_certificate_id, ca_tls_certificate_id, '
'crl_container_id, tls_enabled, provisioning_status, ' +
constants.HTTP_REUSE)
if (proto == constants.PROTOCOL_UDP and
persistence_type == constants.SESSION_PERSISTENCE_SOURCE_IP):
kwargs = {'persistence_type': persistence_type,
'persistence_timeout': persistence_timeout,
'persistence_granularity': persistence_granularity}
else:
kwargs = {'persistence_type': persistence_type,
'persistence_cookie': persistence_cookie}
persis = sample_session_persistence_tuple(**kwargs)
mon = None
if sample_pool == 1:
id = 'sample_pool_id_1'
members = [sample_member_tuple('sample_member_id_1', '10.0.0.99',
monitor_ip_port=monitor_ip_port),
sample_member_tuple('sample_member_id_2', '10.0.0.98',
monitor_ip_port=monitor_ip_port,
backup=backup_member,
enabled=not disabled_member)]
if monitor is True:
mon = sample_health_monitor_tuple(
proto=monitor_proto,
host_http_check=hm_host_http_check,
expected_codes=monitor_expected_codes)
elif sample_pool == 2:
id = 'sample_pool_id_2'
members = [sample_member_tuple('sample_member_id_3', '10.0.0.97',
monitor_ip_port=monitor_ip_port)]
if monitor is True:
mon = sample_health_monitor_tuple(
proto=monitor_proto, sample_hm=2,
host_http_check=hm_host_http_check,
expected_codes=monitor_expected_codes)
return in_pool(
id=id,
protocol=proto,
lb_algorithm='ROUND_ROBIN',
members=members,
health_monitor=mon,
session_persistence=persis if persistence is True else None,
enabled=True,
operating_status='ACTIVE', has_http_reuse=has_http_reuse,
tls_certificate_id='pool_cont_1' if pool_cert else None,
ca_tls_certificate_id='pool_ca_1' if pool_ca_cert else None,
crl_container_id='pool_crl' if pool_crl else None,
tls_enabled=tls_enabled, provisioning_status=provisioning_status)
def sample_member_tuple(id, ip, enabled=True,
operating_status=constants.ACTIVE,
provisioning_status=constants.ACTIVE,
monitor_ip_port=False, backup=False):
in_member = collections.namedtuple('member',
'id, ip_address, protocol_port, '
'weight, subnet_id, '
'enabled, operating_status, '
'monitor_address, monitor_port, '
'backup, provisioning_status')
monitor_address = '192.168.1.1' if monitor_ip_port else None
monitor_port = 9000 if monitor_ip_port else None
return in_member(
id=id,
ip_address=ip,
protocol_port=82,
weight=13,
subnet_id='10.0.0.1/24',
enabled=enabled,
operating_status=operating_status,
monitor_address=monitor_address,
monitor_port=monitor_port,
backup=backup, provisioning_status=provisioning_status)
def sample_session_persistence_tuple(persistence_type=None,
persistence_cookie=None,
persistence_timeout=None,
persistence_granularity=None):
spersistence = collections.namedtuple('SessionPersistence',
'type, cookie_name, '
'persistence_timeout, '
'persistence_granularity')
pt = 'HTTP_COOKIE' if persistence_type is None else persistence_type
return spersistence(type=pt,
cookie_name=persistence_cookie,
persistence_timeout=persistence_timeout,
persistence_granularity=persistence_granularity)
def sample_health_monitor_tuple(proto='HTTP', sample_hm=1,
host_http_check=False, expected_codes=None,
provisioning_status=constants.ACTIVE):
proto = 'HTTP' if proto == 'TERMINATED_HTTPS' else proto
monitor = collections.namedtuple(
'monitor', 'id, type, delay, timeout, fall_threshold, rise_threshold,'
'http_method, url_path, expected_codes, enabled, '
'check_script_path, http_version, domain_name, '
'provisioning_status')
if sample_hm == 1:
id = 'sample_monitor_id_1'
url_path = '/index.html'
elif sample_hm == 2:
id = 'sample_monitor_id_2'
url_path = '/healthmon.html'
kwargs = {
'id': id,
'type': proto,
'delay': 30,
'timeout': 31,
'fall_threshold': 3,
'rise_threshold': 2,
'http_method': 'GET',
'url_path': url_path,
'expected_codes': '418',
'enabled': True,
'provisioning_status': provisioning_status,
}
if host_http_check:
kwargs.update({'http_version': 1.1, 'domain_name': 'testlab.com'})
else:
kwargs.update({'http_version': 1.0, 'domain_name': None})
if expected_codes:
kwargs.update({'expected_codes': expected_codes})
if proto == constants.HEALTH_MONITOR_UDP_CONNECT:
kwargs['check_script_path'] = (CONF.haproxy_amphora.base_path +
'lvs/check/' + 'udp_check.sh')
else:
kwargs['check_script_path'] = None
return monitor(**kwargs)
def sample_l7policy_tuple(id,
action=constants.L7POLICY_ACTION_REJECT,
redirect_pool=None, redirect_url=None,
redirect_prefix=None,
enabled=True, redirect_http_code=302,
sample_policy=1,
provisioning_status=constants.ACTIVE):
in_l7policy = collections.namedtuple('l7policy',
'id, action, redirect_pool, '
'redirect_url, redirect_prefix, '
'l7rules, enabled, '
'redirect_http_code, '
'provisioning_status')
l7rules = []
if sample_policy == 1:
action = constants.L7POLICY_ACTION_REDIRECT_TO_POOL
redirect_pool = sample_pool_tuple(sample_pool=2)
l7rules = [sample_l7rule_tuple('sample_l7rule_id_1')]
elif sample_policy == 2:
action = constants.L7POLICY_ACTION_REDIRECT_TO_URL
redirect_url = 'http://www.example.com'
l7rules = [sample_l7rule_tuple('sample_l7rule_id_2', sample_rule=2),
sample_l7rule_tuple('sample_l7rule_id_3', sample_rule=3)]
elif sample_policy == 3:
action = constants.L7POLICY_ACTION_REJECT
l7rules = [sample_l7rule_tuple('sample_l7rule_id_4', sample_rule=4),
sample_l7rule_tuple('sample_l7rule_id_5', sample_rule=5)]
elif sample_policy == 4:
action = constants.L7POLICY_ACTION_REJECT
elif sample_policy == 5:
action = constants.L7POLICY_ACTION_REJECT
enabled = False
l7rules = [sample_l7rule_tuple('sample_l7rule_id_5', sample_rule=5)]
elif sample_policy == 6:
action = constants.L7POLICY_ACTION_REJECT
l7rules = [sample_l7rule_tuple('sample_l7rule_id_6', sample_rule=6)]
elif sample_policy == 7:
action = constants.L7POLICY_ACTION_REDIRECT_PREFIX
redirect_prefix = 'https://example.com'
l7rules = [sample_l7rule_tuple('sample_l7rule_id_2', sample_rule=2),
sample_l7rule_tuple('sample_l7rule_id_3', sample_rule=3)]
elif sample_policy == 8:
action = constants.L7POLICY_ACTION_REDIRECT_TO_URL
redirect_url = 'http://www.ssl-type-l7rule-test.com'
l7rules = [sample_l7rule_tuple('sample_l7rule_id_7', sample_rule=7),
sample_l7rule_tuple('sample_l7rule_id_8', sample_rule=8),
sample_l7rule_tuple('sample_l7rule_id_9', sample_rule=9),
sample_l7rule_tuple('sample_l7rule_id_10', sample_rule=10),
sample_l7rule_tuple('sample_l7rule_id_11', sample_rule=11)]
return in_l7policy(
id=id,
action=action,
redirect_pool=redirect_pool,
redirect_url=redirect_url,
redirect_prefix=redirect_prefix,
l7rules=l7rules,
enabled=enabled,
redirect_http_code=redirect_http_code
if (action in [constants.L7POLICY_ACTION_REDIRECT_TO_URL,
constants.L7POLICY_ACTION_REDIRECT_PREFIX] and
redirect_http_code) else None,
provisioning_status=provisioning_status)
def sample_l7rule_tuple(id,
type=constants.L7RULE_TYPE_PATH,
compare_type=constants.L7RULE_COMPARE_TYPE_STARTS_WITH,
key=None, value='/api',
invert=False, enabled=True,
sample_rule=1, provisioning_status=constants.ACTIVE):
in_l7rule = collections.namedtuple('l7rule',
'id, type, compare_type, '
'key, value, invert, enabled, '
'provisioning_status')
if sample_rule == 2:
type = constants.L7RULE_TYPE_HEADER
compare_type = constants.L7RULE_COMPARE_TYPE_CONTAINS
key = 'Some-header'
value = 'This string\\ with stuff'
invert = True
enabled = True
if sample_rule == 3:
type = constants.L7RULE_TYPE_COOKIE
compare_type = constants.L7RULE_COMPARE_TYPE_REGEX
key = 'some-cookie'
value = 'this.*|that'
invert = False
enabled = True
if sample_rule == 4:
type = constants.L7RULE_TYPE_FILE_TYPE
compare_type = constants.L7RULE_COMPARE_TYPE_EQUAL_TO
key = None
value = 'jpg'
invert = False
enabled = True
if sample_rule == 5:
type = constants.L7RULE_TYPE_HOST_NAME
compare_type = constants.L7RULE_COMPARE_TYPE_ENDS_WITH
key = None
value = '.example.com'
invert = False
enabled = True
if sample_rule == 6:
type = constants.L7RULE_TYPE_HOST_NAME
compare_type = constants.L7RULE_COMPARE_TYPE_ENDS_WITH
key = None
value = '.example.com'
invert = False
enabled = False
if sample_rule == 7:
type = constants.L7RULE_TYPE_SSL_CONN_HAS_CERT
compare_type = constants.L7RULE_COMPARE_TYPE_EQUAL_TO
key = None
value = 'tRuE'
invert = False
enabled = True
if sample_rule == 8:
type = constants.L7RULE_TYPE_SSL_VERIFY_RESULT
compare_type = constants.L7RULE_COMPARE_TYPE_EQUAL_TO
key = None
value = '1'
invert = True
enabled = True
if sample_rule == 9:
type = constants.L7RULE_TYPE_SSL_DN_FIELD
compare_type = constants.L7RULE_COMPARE_TYPE_REGEX
key = 'STREET'
value = r'^STREET.*NO\.$'
invert = True
enabled = True
if sample_rule == 10:
type = constants.L7RULE_TYPE_SSL_DN_FIELD
compare_type = constants.L7RULE_COMPARE_TYPE_STARTS_WITH
key = 'OU-3'
value = 'Orgnization Bala'
invert = True
enabled = True
return in_l7rule(
id=id,
type=type,
compare_type=compare_type,
key=key,
value=value,
invert=invert,
enabled=enabled, provisioning_status=provisioning_status)
def sample_base_expected_config(frontend=None, logging=None, backend=None,
peers=None, global_opts=None, defaults=None):
if frontend is None:
frontend = ("frontend sample_listener_id_1\n"
" maxconn {maxconn}\n"
" bind 10.0.0.2:80\n"
" mode http\n"
" default_backend sample_pool_id_1\n"
" timeout client 50000\n").format(
maxconn=constants.HAPROXY_MAX_MAXCONN)
if logging is None:
logging = (" log-format 12345\\ sample_loadbalancer_id_1\\ %f\\ "
"%ci\\ %cp\\ %t\\ %{+Q}r\\ %ST\\ %B\\ %U\\ "
"%[ssl_c_verify]\\ %{+Q}[ssl_c_s_dn]\\ %b\\ %s\\ %Tt\\ "
"%tsc\n\n")
if backend is None:
backend = ("backend sample_pool_id_1\n"
" mode http\n"
" balance roundrobin\n"
" cookie SRV insert indirect nocache\n"
" timeout check 31s\n"
" option httpchk GET /index.html HTTP/1.0\\r\\n\n"
" http-check expect rstatus 418\n"
" fullconn {maxconn}\n"
" option allbackups\n"
" timeout connect 5000\n"
" timeout server 50000\n"
" server sample_member_id_1 10.0.0.99:82 weight 13 "
"check inter 30s fall 3 rise 2 cookie sample_member_id_1\n"
" server sample_member_id_2 10.0.0.98:82 weight 13 "
"check inter 30s fall 3 rise 2 cookie sample_member_id_2\n"
"\n").format(maxconn=constants.HAPROXY_MAX_MAXCONN)
if peers is None:
peers = "\n\n"
if global_opts is None:
global_opts = " maxconn {maxconn}\n\n".format(
maxconn=constants.HAPROXY_MAX_MAXCONN)
if defaults is None:
defaults = ("defaults\n"
" log global\n"
" retries 3\n"
" option redispatch\n"
" option splice-request\n"
" option splice-response\n"
" option http-keep-alive\n\n")
return ("# Configuration for loadbalancer sample_loadbalancer_id_1\n"
"global\n"
" daemon\n"
" user nobody\n"
" log /run/rsyslog/octavia/log local0\n"
" log /run/rsyslog/octavia/log local1 notice\n"
" stats socket /var/lib/octavia/sample_listener_id_1.sock"
" mode 0666 level user\n" +
global_opts + defaults + peers + frontend + logging + backend)
Big Sandy Superstore was founded in Ashland, Kentucky in 1953 by Robert Van Hoose, Sr. Fresh out of the Air Force, Mr. Van Hoose had a $1000 loan from his wife Lorna and opened his first store. With his faith in God as his foundation, Mr. Van Hoose ran his business on a very basic principle: the golden rule. He believed in giving customers the best possible value for the money spent and if there was a problem, simply follow the golden rule. This simple formula has stood the test of time as Big Sandy has grown into a regional powerhouse in the home furnishings industry and is one of the nation's top 100 furniture retailers. Now with multiple store locations and over 600 employees in Kentucky, Ohio, and West Virginia, we continue to make every decision on the golden rule principle. You are the reason we are online, so you can browse when it's convenient for you. I hope you'll feel at home here and look forward to seeing you in one of our stores real soon.
import pylab
WORDLIST_FILENAME = "words.txt"
def loadWords():
"""
Returns a list of valid words. Words are strings of lowercase letters.
Depending on the size of the word list, this function may
take a while to finish.
"""
    print("Loading word list from file...")
    # inFile: file
    inFile = open(WORDLIST_FILENAME, 'r')
# wordList: list of strings
wordList = []
for line in inFile:
wordList.append(line.strip().lower())
    print("  ", len(wordList), "words loaded.")
return wordList
def plotVowelProportionHistogram(wordList, numBins=15):
"""
Plots a histogram of the proportion of vowels in each word in wordList
using the specified number of bins in numBins
"""
vowels = [c for c in 'aeiou']
fracs = []
for word in wordList:
frac = sum(1 if c in vowels else 0 for c in word)/float(len(word))
fracs.append(frac)
pylab.figure()
pylab.hist(fracs, bins=numBins)
pylab.title('Histogram of the proportion of vowels in each word')
pylab.xlabel('Vowel proportions')
pylab.ylabel('Number of words with the vowel proportion')
pylab.show()
if __name__ == '__main__':
wordList = loadWords()
plotVowelProportionHistogram(wordList)
It’s wintertime in NYC, and whether you are meeting up with friends and family or perusing the holiday markets, you are probably trying to figure out whether and how to bring your little one along. Let’s talk about some of the ways to tote around our babes around the city comfortably and safely.
Carriers are a great way to keep baby snuggled up as you navigate the often-packed streets of the city. There’s no need to negotiate a stroller through a crowd if baby is tucked in close to you.
There are many different carriers on the market, and the most popular ones generally fall into three categories: wraps, slings, and structured harnesses. There are wraps that are “pre-wrapped”, where loops of fabric are already stitched together, and there are wraps that are a longer swath of fabric that you would loop and tie together yourself. A sling is similar to a wrap but is a loop of fabric that usually has rings to help with fit and adjustment. A structured harness has shoulder straps to wear baby like a backpack.
Whichever type of carrier you choose, remember to keep baby in an ergonomic position, meaning that baby’s legs should be supported from hip to knee to make sure that baby’s hips develop properly (https://hipdysplasia.org/developmental-dysplasia-of-the-hip/prevention/baby-carriers-seats-and-other-equipment/). Avoid positions where the hips dangle, or where they are horizontal like in a cradled position in a sling. A cradled position in a sling can also tip the chin downwards and pinch off baby’s airway, so in general, it is not a recommended position to carry a baby in. Make sure you also follow the manufacturer’s recommended guidelines for each product, including when it is okay to turn a baby forward or to wear a baby on the back for a structured carrier. Some structured carriers also require an infant insert.
Strollers also come in many types. Some are more lightweight, so they may be easier to carry up and down subway stairs. Some are more substantial, with an undercarriage for holiday groceries and gifts.
Like carriers, different strollers are appropriate for different applications and ages. A full-size stroller can often be used from newborn and up with a bassinet attachment option, and you may be able to also attach a car seat depending on the model. An umbrella stroller, while it is more compact and space-saving, would be used for older infants and children. A jogging stroller for parents on the go would also be more appropriate for an older baby.
Remember that if your baby is not yet sitting up with good head control, he or she is not yet developmentally ready for a more upright seat, which requires the tone and coordination for that position. A baby generally begins to sit up, first with assistance and then independently, between 4 and 6 months. Conversely, a bassinet stroller attachment is not safe for a baby who can sit up and possibly pull themselves out of one. You may be thinking of draping a blanket over your baby’s stroller to cover him or her while walking through a crowd. It may be safe to do so with a light breathable blanket, like a muslin blanket, and for very brief periods of time. Keeping a blanket over the stroller means you cannot see the baby and might also impede ventilation, so be mindful if you are doing this.
Car seats, when used appropriately, can keep baby safe during holiday travels. During the cold winter months, make sure there are no extra layers from a thick puffy jacket or footmuff between the car seat or car seat straps/harness and the baby. There are certain coats and footmuffs that can be safe, but check that there is no slack in the seat belts when they are used in the car seat (https://thecarseatlady.com/warmandsafe/). The newest recommendations from the American Academy of Pediatrics (AAP) also reinforces that riding rear-facing is the safest position for young children in the car, even to 4 years old (https://www.healthychildren.org/English/safety-prevention/on-the-go/Pages/Car-Safety-Seats-Information-for-Families.aspx). There are also expiration dates for car seats since they can become warped, brittle, or technologically outdated over time. Check your car seat for its expiration date, and for the range of safe weights and heights that it can accommodate.
Additionally, note that some ride-share and private taxi companies can provide a car seat. However, this seat is typically for children older than 1 and forward-facing, which is not recommended by the AAP. Yellow cabs are exempt from requiring car seats for young passengers and allow children up to 7 years old to sit on an adult’s lap, which is also against the AAP’s safety guidelines.
Riding subways, buses, and trains is another option for our littlest urbanites. The MTA offers some guidelines when traveling with baby, including safety at stations and inside their vehicles (http://web.mta.info/safety/). Vigilance is important to ensure young curious children are kept close and supervised. Infants and children may also be sensitive to noise, and continued exposure to loud sounds like rumbling subways could impact hearing (https://www.healthychildren.org/English/health-issues/conditions/ear-nose-throat/Pages/Tips-Preserve-Childs-Hearing-Holidays.aspx). Ear muffs or other hearing protection devices may be appropriate to filter out louder sounds. Packed subways and buses may also mean close proximity to sick individuals during cold and flu season, so commute mindfully.
import binascii
def fingerprint(tokens, dict_limit=25, token_limit=None, debug=False):
'''
Paper: "Locality Sensitive Hashing for Scalable Structural
Classification and Clustering of Web Documents"
Hachenberg, Christian; Gottron, Thomas (2013)
https://west.uni-koblenz.de/de/forschung/datensaetze/template-detection
https://dl.acm.org/citation.cfm?id=2505673
'''
    d = {}                # phrase dictionary: token tuple -> entry id
    dict_entry_id = 1
    buff = tuple()        # longest phrase matched so far
    prefix_id = 0
    output = []           # ids of the prefixes of newly created phrases
for cnt, tok in enumerate(tokens, start=1):
if token_limit is not None and cnt > token_limit:
break
token = (tok,)
buffer_token = buff + token
if buffer_token in d:
buff = buffer_token
else:
prefix_id = d.get(buff)
if prefix_id is not None:
output.append(prefix_id)
else:
output.append(0)
d[buffer_token] = dict_entry_id
dict_entry_id += 1
buff = tuple()
if dict_entry_id > dict_limit:
break
return output
def hexfp(fingerprint):
_bytes = bytearray(fingerprint)
return binascii.hexlify(_bytes).decode('ascii')
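A quick self-contained sanity check of the same phrase-dictionary idea (a condensed variant of the function above, without the `token_limit` handling; the tag tokens are invented for illustration):

```python
import binascii

def fingerprint_sketch(tokens, dict_limit=25):
    # LZ78-style loop: extend the longest phrase already in the
    # dictionary; when a new phrase is created, emit the id of its
    # known prefix (0 for the empty prefix).
    d, next_id, buff, output = {}, 1, (), []
    for tok in tokens:
        phrase = buff + (tok,)
        if phrase in d:
            buff = phrase
        else:
            output.append(d.get(buff, 0))
            d[phrase] = next_id
            next_id += 1
            buff = ()
            if next_id > dict_limit:
                break
    return output

fp = fingerprint_sketch(['html', 'div', 'html', 'div', 'html'])
print(fp)                                               # [0, 0, 1]
print(binascii.hexlify(bytearray(fp)).decode('ascii'))  # 000001
```

The third entry is 1 because the phrase ('html', 'div') is created after its prefix ('html',) was stored with id 1, matching the behavior of `fingerprint` and `hexfp` above.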
If you are dissatisfied at any time during the first 30 days after purchase, simply ask for a full refund. You will get your money back. That’s a firm promise and commitment.
Email your refund requests to admin@tigercontent.com. Please include the words “Refund Request” in the subject line of your email.
Welcome! You’re likely here because you need well-written web content to attract customers and sell your products or services. There are thousands of service providers to choose from, but you really only need one, and you’d like to find that company sooner rather than later.
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Tests for RMSProp optimizer."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
from tensorflow.compiler.tests import xla_test
from tensorflow.python.framework import constant_op
from tensorflow.python.ops import resource_variable_ops
from tensorflow.python.ops import variables
from tensorflow.python.platform import test
from tensorflow.python.training import rmsprop
class RmspropTest(xla_test.XLATestCase):
def _rmsprop_update_numpy(self,
var,
g,
mg,
rms,
mom,
lr,
decay=0.9,
momentum=0.0,
epsilon=1e-10,
centered=False):
rms_t = rms * decay + (1 - decay) * g * g
denom_t = rms_t + epsilon
if centered:
mg_t = mg * decay + (1 - decay) * g
denom_t -= mg_t * mg_t
else:
mg_t = mg
mom_t = momentum * mom + lr * g / np.sqrt(denom_t, dtype=denom_t.dtype)
var_t = var - mom_t
return var_t, mg_t, rms_t, mom_t
def testBasic(self):
for dtype in self.float_types:
for centered in [False, True]:
with self.test_session(), self.test_scope():
# Initialize variables for numpy implementation.
var0_np = np.array([1.0, 2.0], dtype=dtype)
grads0_np = np.array([0.1, 0.1], dtype=dtype)
var1_np = np.array([3.0, 4.0], dtype=dtype)
grads1_np = np.array([0.01, 0.01], dtype=dtype)
mg0_np = np.array([0.0, 0.0], dtype=dtype)
mg1_np = np.array([0.0, 0.0], dtype=dtype)
rms0_np = np.array([1.0, 1.0], dtype=dtype)
rms1_np = np.array([1.0, 1.0], dtype=dtype)
mom0_np = np.array([0.0, 0.0], dtype=dtype)
mom1_np = np.array([0.0, 0.0], dtype=dtype)
var0 = resource_variable_ops.ResourceVariable(var0_np)
var1 = resource_variable_ops.ResourceVariable(var1_np)
grads0 = constant_op.constant(grads0_np)
grads1 = constant_op.constant(grads1_np)
learning_rate = 3.0
rms_opt = rmsprop.RMSPropOptimizer(learning_rate, centered=centered)
rms_update = rms_opt.apply_gradients(
zip([grads0, grads1], [var0, var1]))
variables.global_variables_initializer().run()
mg0 = rms_opt.get_slot(var0, "mg")
self.assertEqual(mg0 is not None, centered)
mg1 = rms_opt.get_slot(var1, "mg")
self.assertEqual(mg1 is not None, centered)
rms0 = rms_opt.get_slot(var0, "rms")
self.assertTrue(rms0 is not None)
rms1 = rms_opt.get_slot(var1, "rms")
self.assertTrue(rms1 is not None)
mom0 = rms_opt.get_slot(var0, "momentum")
self.assertTrue(mom0 is not None)
mom1 = rms_opt.get_slot(var1, "momentum")
self.assertTrue(mom1 is not None)
# Fetch params to validate initial values
self.assertAllClose([1.0, 2.0], var0.eval())
self.assertAllClose([3.0, 4.0], var1.eval())
# Run 3 steps of RMSProp
for _ in range(3):
rms_update.run()
var0_np, mg0_np, rms0_np, mom0_np = self._rmsprop_update_numpy(
var0_np,
grads0_np,
mg0_np,
rms0_np,
mom0_np,
learning_rate,
centered=centered)
var1_np, mg1_np, rms1_np, mom1_np = self._rmsprop_update_numpy(
var1_np,
grads1_np,
mg1_np,
rms1_np,
mom1_np,
learning_rate,
centered=centered)
# Validate updated params
if centered:
self.assertAllCloseAccordingToType(mg0_np, mg0.eval())
self.assertAllCloseAccordingToType(mg1_np, mg1.eval())
self.assertAllCloseAccordingToType(rms0_np, rms0.eval())
self.assertAllCloseAccordingToType(rms1_np, rms1.eval())
self.assertAllCloseAccordingToType(mom0_np, mom0.eval())
self.assertAllCloseAccordingToType(mom1_np, mom1.eval())
self.assertAllCloseAccordingToType(var0_np, var0.eval())
self.assertAllCloseAccordingToType(var1_np, var1.eval())
if __name__ == "__main__":
test.main()
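For readers who want to poke at the update rule outside the test harness, here is a stand-alone numpy sketch of one non-centered RMSProp step, mirroring `_rmsprop_update_numpy` above (the initial values match `testBasic`'s `var0` setup):

```python
import numpy as np

def rmsprop_step(var, g, rms, mom, lr, decay=0.9, momentum=0.0, epsilon=1e-10):
    # Decayed running average of squared gradients.
    rms = decay * rms + (1.0 - decay) * g * g
    # Momentum-smoothed, RMS-normalized step.
    mom = momentum * mom + lr * g / np.sqrt(rms + epsilon)
    return var - mom, rms, mom

var = np.array([1.0, 2.0])
g = np.array([0.1, 0.1])
var, rms, mom = rmsprop_step(var, g, np.ones_like(var), np.zeros_like(var), lr=3.0)
# rms becomes 0.9 * 1.0 + 0.1 * 0.01 = 0.901, and each parameter moves
# by lr * g / sqrt(rms + epsilon), roughly 0.316.
print(var, rms, mom)
```

Running the real optimizer for one step and comparing against this function is exactly what the loop in `testBasic` does, just extended with momentum and the centered variant.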
EVOKING MEMORIES « THE BRYCE IS RIGHT!
– How the sense of smell and taste can unleash vivid memories.
Of all of our senses, smell and taste can trigger vivid emotional memories, even going so far as to make us feel like we are being transported back in time. Sight, sound, and touch are also useful, but smell and taste evoke powerful images for us. I have three personal examples that take me back in time to my youth.
The first involves the use of my taste buds. Lately I’ve taken to drinking fruit juices late at night. I have orange juice which is usually reserved for breakfast, but I also keep apple and grape juice in the fridge, along with a fruit punch, something I enjoyed in my youth. I usually opt for the diet lite versions of these products as I do not want the sugar, but they are still delicious and I like them particularly cold. When I drink them, the taste takes me back to the early 1960’s when I enjoyed such drinks in large tin cans which we would open with “church keys.” If I was lucky, I would drink from the can and distinctly remember the taste of the tin which added to its flavor. In particular, the grape drink reminds me of the cheap frozen popsicles we would enjoy during the summertime. Back then, we also poured the grape drink into a Tupperware popsicle maker and froze it. When I consume these drinks today, I am transported back for a few scant seconds where I enjoyed such heavenly drinks.
The second experience involves the use of smell. Sometimes, early in the morning, when I go to retrieve the newspaper in the driveway, the sun is just starting to peek up over the horizon and I can smell the dew on the lawn. It’s even better if the grass was freshly cut. It’s at this moment when I return to my elementary school in Connecticut where I used to ride my old reliable J.C. Higgins bicycle early to school so my friends and I could play a couple of innings of baseball before the first bell. Our parents could never understand why we wanted to go to school so early, but they chalked it up as a positive sign we liked school. Actually, it was all about baseball. As I smell the morning today, I vividly remember what route I would take to school, how fast I would go on my bike, ever mindful not to let my books and baseball mitt pop out of my front basket.
The third experience also involves smell. You have heard me talk about my fly-fishing excursions in the past, particularly in North Carolina. There is something inspirational about working a stream, something rather peaceful and therapeutic. In my case, when I enter a babbling brook, I am again transported back to the Connecticut of my youth, where we would fish in streams with simple rods and reels, using stringers to secure our catches, and learning how to clean the fish afterwards. Near the streams were fruit trees, and we would enjoy apples and peaches. We spent a lot of time in the streams, fishing and swimming, and building forts along the way to stay out of our parents’ eyes. It was a glorious time.
Our senses of smell and taste are powerful links to our past. They remind us of the kitchens of our grandparents, certain restaurants, and of events in the past, small or epochal. A memory is evoked by such simple things as aftershave lotion, burning leaves, pipe tobacco, cooking with charcoal briquettes, bacon, burned toast, etc., and suddenly we are transported back to our youth. Sadly, as strong as these memories are, they last but a few precious seconds, which is long enough to remind me how lucky I was to enjoy such experiences.
NEXT UP: OUR SENSE OF PROFESSIONALISM – It’s about substance versus facade.
This entry was posted on October 16, 2015 at 6:00 am and is filed under Life.
|
# (c) 2013-2018 Sebastian Humenda
# This code is licenced under the terms of the LGPL-3+, see the file COPYING for
# more details.
"""Top-level API to parse input documents.
The main point of the parsing is to extract formulas from a given input
document, while preserving the remaining formatting.
The returned parsed document structure is highly dependent on the input format
and hence documented in the respective functions."""
import enum
import json
import sys
from . import htmlhandling
from . import pandoc
ParseException = (
htmlhandling.ParseException
) # re-export for consistent API from outside
class Format(enum.Enum):
HTML = 0
# while this is json, we never know what other applications might decide to
# use json as their intermediate representation ;)
PANDOCFILTER = 1
@staticmethod
def parse(string):
string = string.lower()
if string == "html":
return Format.HTML
elif string == "pandocfilter":
return Format.PANDOCFILTER
else:
raise ValueError("unrecognised format: %s" % string)
def parse_document(doc, fmt):
"""This function parses an input document (string or bytes) with the given
format specifier. For HTML, the returned "parsed" document is a list of
chunks, where raw chunks are just plain HTML instructions and data and
formula chunks are parsed from the '<eq/>' tags.
If the input document is a pandoc AST, the formulas will be extracted and
the document is a tuple of (pandoc AST, formulas).
:param doc input of bytes or string to parse
:param fmt either the enum type `Format` or a string understood by Format.parse
:return (encoding, document) (a tuple)"""
if isinstance(fmt, str):
fmt = Format.parse(fmt)
encoding = None
if fmt == Format.HTML:
docparser = htmlhandling.EqnParser()
docparser.feed(doc)
encoding = docparser.get_encoding()
encoding = encoding if encoding else "utf-8"
doc = docparser.get_data()
elif fmt == Format.PANDOCFILTER:
if isinstance(doc, bytes):
doc = doc.decode(sys.getdefaultencoding())
ast = json.loads(doc)
formulas = pandoc.extract_formulas(ast)
doc = (ast, formulas) # ← see doc string
if not encoding:
encoding = sys.getdefaultencoding()
return encoding, doc
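The dispatch in `parse_document` hinges on `Format.parse` being case-insensitive and on the pandoc branch decoding bytes before `json.loads`. A minimal standalone sketch of those two behaviors follows; the enum is reproduced here for illustration, and `parse_format` is a hypothetical stand-in for `Format.parse`, not part of this module:

```python
import enum
import json
import sys

class Format(enum.Enum):
    """Mirror of the module's Format enum, reproduced for a standalone demo."""
    HTML = 0
    PANDOCFILTER = 1

def parse_format(string):
    # Same case-insensitive dispatch as Format.parse above.
    try:
        return Format[string.upper()]
    except KeyError:
        raise ValueError("unrecognised format: %s" % string)

# A pandoc AST arrives as JSON text; the PANDOCFILTER branch decodes
# bytes with the default encoding before handing them to json.loads:
raw = json.dumps({"blocks": [], "meta": {}}).encode(sys.getdefaultencoding())
ast = json.loads(raw.decode(sys.getdefaultencoding()))
```

Callers can therefore pass either `Format.HTML` or the plain string `"html"` as the `fmt` argument and get the same branch.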
|
A blocked nose makes it difficult to breathe through the nostrils, and you may need to breathe through the mouth instead. A blocked nose is also known to alter the tone and timbre of the voice.
A stuffy nose is a common condition caused by the common cold, flu, or blocked nasal sinuses. Though not a serious problem, a stuffy nose can be irritating and make you uncomfortable. A child can become very irritable with a stuffy nose, as he is not able to breathe through his nose.
|
# -*- coding:utf8 -*-
# ####################### BEGIN LICENSE BLOCK ########################
# This library is free software; you can redistribute it and/or
# modify it under the terms of the GNU Lesser General Public
# License as published by the Free Software Foundation; either
# version 2.1 of the License, or (at your option) any later version.
#
# This library is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
# Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this library; if not, write to the Free Software
# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
# 02110-1301 USA
# ######################## END LICENSE BLOCK #########################
"""
module for some argument checking
"""
from json import loads
from requests import head
from .exceptions import (MallformedResize, UnsupportedRotation,
UnsupportedFormat, UnkownError, ServerError,
EmptyResponse, PercentageOutOfRange)
from .globals import ALLOWED_ROTATION, USER_AGENT
def check_rotation(rotation):
"""checks rotation parameter if illegal value raises exception"""
if rotation not in ALLOWED_ROTATION:
allowed_rotation = ', '.join(ALLOWED_ROTATION)
        raise UnsupportedRotation('Rotation %s is not allowed. Allowed are %s'
                                  % (rotation, allowed_rotation))
def check_resize(resize):
"""checks resize parameter if illegal value raises exception"""
if resize is None:
return
resize = resize.lower().strip()
if 'x' in resize:
        tmp = [x.strip() for x in resize.split('x')]
if len(tmp) == 2 and tmp[0].isdigit() and tmp[1].isdigit():
return
elif '%' in resize:
tmp = resize.split('%')[0]
if tmp.isnumeric():
tmp = int(tmp)
if 1 <= tmp <= 1000:
return
else:
raise PercentageOutOfRange("percentage must be between 1 and 1000")
    raise MallformedResize('Resize value "%s" is malformed. '
                           'Desired format is: {width}x{height} or {percentage}%%' % resize)
def check_noexif(noexif):
"""checks if noexif parameter is boolean"""
if not isinstance(noexif, bool):
raise TypeError('noexif must be boolean')
def check_callback(callback):
"""checks if callback is callable"""
if not callable(callback) and callback is not None:
raise TypeError('%s is not callable' % callback)
def check_response(response):
"""
checks the response if the server returned an error raises an exception.
"""
    if not 200 <= response.status_code < 300:
        raise ServerError('API request returned an error: %s'
                          % response.status_code)
try:
response_text = loads(response.text)
except ValueError:
        raise ServerError('The API did not return a JSON string.')
if not response_text:
raise EmptyResponse()
if 'failure' in response_text:
if response_text['failure'] == 'Falscher Dateityp':
            raise UnsupportedFormat('Please check picflash.org '
                                    'to see which formats are supported')
else:
raise UnkownError(response_text['failure'])
def check_if_redirect(url):
"""
checks if server redirects url
"""
response = head(url, headers={'User-Agent': USER_AGENT})
if response.status_code >= 300 and response.status_code < 400:
return response.headers['location']
return None
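The resize rules enforced by `check_resize` above accept exactly two forms: `{width}x{height}` with both parts numeric, or `{percentage}%` with the percentage between 1 and 1000. A minimal standalone sketch of the same validation logic follows; `resize_is_valid` is a hypothetical helper written for illustration, not part of this module:

```python
def resize_is_valid(resize):
    """Return True if `resize` matches '{width}x{height}' or '{percentage}%'
    (percentage between 1 and 1000), mirroring check_resize's rules."""
    if resize is None:
        return True  # check_resize treats None as "no resize requested"
    resize = resize.lower().strip()
    if 'x' in resize:
        # width x height: both sides must be plain digits
        parts = [p.strip() for p in resize.split('x')]
        return len(parts) == 2 and all(p.isdigit() for p in parts)
    if resize.endswith('%'):
        # percentage form: numeric and within the allowed range
        number = resize[:-1]
        return number.isdigit() and 1 <= int(number) <= 1000
    return False

# Accepted: '800x600', '50%'; rejected: '0%', '1001%', 'big'
```

The real `check_resize` raises `MallformedResize` or `PercentageOutOfRange` instead of returning a boolean, but the accepted inputs are the same.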
|
The third of our students to join our UK Scholarship Fund is Lourdina Baboun from Bethlehem. This is yet another interview that will tell you a bit more about our musicians.
Palmusic UK: Tell us a little bit about you, where you come from and what you do?
Lourdina Baboun [LB]: My name is Lourdina Baboun, and I am a Palestinian violinist from Bethlehem. I studied at the Edward Said National Conservatory of Music (ESNCM) in Bethlehem for eight years, where I also obtained my degree. Afterwards I moved to France to continue my studies as a violinist. At the moment, I am at the Royal Birmingham Conservatoire doing my BMus studies.
Palmusic UK: Why did you decide to apply for this scholarship?
Lourdina Baboun [LB]: The expenses here in England are very high, and I needed support to help me achieve my goal.
Palmusic UK: How did you learn about Palmusic?
Lourdina Baboun [LB]: I have played at a fundraising concert for the ESNCM in London, and Palmusic UK had organized the concert and all aspects relating to the event. That’s how I learned about the scope of work Palmusic UK do.
Palmusic UK: Is this your first time away from home?
Lourdina Baboun [LB]: No, I have been away before, when I moved to France to study for my BA degree there. I studied in France for three years.
Palmusic UK: What are the things that you are looking forward to the most?
Lourdina Baboun [LB]: Certainly becoming a better violinist and presenting myself as a Palestinian violinist from Bethlehem. I’d like to show people that when you believe that “Where there’s a will there’s a way,” there truly is a way.
Palmusic UK: What do you think will be the biggest challenge for you being away from home?
Lourdina Baboun [LB]: Building a stronger personality is always key. When you build it, you won’t have challenges away from home. On the other hand, the challenges of studies and what they entail will be hard, because I will be alone and I will need to solve everything on my own.
Palmusic UK: Do you think that this experience will have a profound effect on your life?
Lourdina Baboun [LB]: It will, and it has already happened when we found out that our conservatory is now a ROYAL conservatoire. There will be a lot of hard work on our way, and I am sure through the years we will attend many masterclasses and workshops that will help me become a better violinist. I know that being here is a big responsibility for myself and then for everybody who has helped me to get to where I am now.
Palmusic UK: In what ways do you think will this experience influence your development?
Lourdina Baboun [LB]: The opportunities that I have in this conservatoire are very important and special; making the most of these experiences will no doubt help me develop.
Palmusic UK: What are your hopes and wishes for the future in terms of your career?
Lourdina Baboun [LB]: Everybody has a vision, and I have one as well, but in order to achieve what I want, I first need to know exactly what I should be doing this year and set the targets I want to reach and improve on. If I attain that during the time I spend here in England, and do it the right way, I will be ready to say and do what I want.
|
# Copyright 2017 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import functools
import os
import six
from tensorflow.contrib.eager.python import checkpointable_utils
from tensorflow.python.client import session as session_lib
from tensorflow.python.eager import backprop
from tensorflow.python.eager import context
from tensorflow.python.eager import test
from tensorflow.python.framework import constant_op
from tensorflow.python.framework import dtypes
from tensorflow.python.framework import ops
from tensorflow.python.framework import test_util
from tensorflow.python.keras._impl.keras.engine import training
from tensorflow.python.layers import core
from tensorflow.python.ops import control_flow_ops
from tensorflow.python.ops import init_ops
from tensorflow.python.ops import resource_variable_ops
from tensorflow.python.ops import state_ops
from tensorflow.python.ops import template
from tensorflow.python.ops import variable_scope
from tensorflow.python.training import adam
from tensorflow.python.training import checkpointable
from tensorflow.python.training import saver as core_saver
from tensorflow.python.training import training_util
class NonLayerCheckpointable(checkpointable.Checkpointable):
def __init__(self):
super(NonLayerCheckpointable, self).__init__()
self.a_variable = checkpointable_utils.add_variable(
self, name="a_variable", shape=[])
# pylint: disable=not-callable
class MyModel(training.Model):
"""A concrete Model for testing."""
def __init__(self):
super(MyModel, self).__init__()
self._named_dense = core.Dense(1, use_bias=True)
self._second = core.Dense(1, use_bias=False)
# We can still track Checkpointables which aren't Layers.
self._non_layer = NonLayerCheckpointable()
def call(self, values):
ret = self._second(self._named_dense(values))
return ret
class InterfaceTests(test.TestCase):
@test_util.run_in_graph_and_eager_modes(assert_no_eager_garbage=True)
def testAddVariable(self):
obj = NonLayerCheckpointable()
with self.assertRaisesRegexp(ValueError, "do not specify shape"):
checkpointable_utils.add_variable(
obj, name="shape_specified_twice", shape=[], initializer=1)
constant_initializer = checkpointable_utils.add_variable(
obj, name="constant_initializer", initializer=1)
with variable_scope.variable_scope("some_variable_scope"):
ones_initializer = checkpointable_utils.add_variable(
obj,
name="ones_initializer",
shape=[2],
initializer=init_ops.ones_initializer(dtype=dtypes.float32))
bare_initializer = checkpointable_utils.add_variable(
obj,
name="bare_initializer",
shape=[2, 2],
dtype=dtypes.float64,
initializer=init_ops.zeros_initializer)
# Even in graph mode, there are no naming conflicts between objects, only
# naming conflicts within an object.
other_duplicate = resource_variable_ops.ResourceVariable(
name="duplicate", initial_value=1.)
duplicate = checkpointable_utils.add_variable(
obj, name="duplicate", shape=[])
with self.assertRaisesRegexp(ValueError, "'duplicate' already exists"):
checkpointable_utils.add_variable(obj, name="duplicate", shape=[])
self.evaluate(checkpointable_utils.gather_initializers(obj))
self.assertEqual("constant_initializer:0", constant_initializer.name)
self.assertEqual(1, self.evaluate(constant_initializer))
self.assertEqual("some_variable_scope/ones_initializer:0",
ones_initializer.name)
self.assertAllEqual([1, 1], self.evaluate(ones_initializer))
self.assertAllEqual([[0., 0.],
[0., 0.]], self.evaluate(bare_initializer))
self.assertEqual("a_variable:0", obj.a_variable.name)
self.assertEqual("duplicate:0", other_duplicate.name)
if context.executing_eagerly():
# When executing eagerly, there's no uniquification of variable names. The
# checkpoint name will be the same.
self.assertEqual("duplicate:0", duplicate.name)
else:
# The .name attribute may be globally influenced, but the checkpoint name
# won't be (tested below).
self.assertEqual("duplicate_1:0", duplicate.name)
named_variables, _ = checkpointable_utils._serialize_object_graph(obj)
expected_checkpoint_names = (
"a_variable/.ATTRIBUTES/VARIABLE_VALUE",
"bare_initializer/.ATTRIBUTES/VARIABLE_VALUE",
"constant_initializer/.ATTRIBUTES/VARIABLE_VALUE",
"duplicate/.ATTRIBUTES/VARIABLE_VALUE",
"ones_initializer/.ATTRIBUTES/VARIABLE_VALUE",
)
six.assertCountEqual(
self, expected_checkpoint_names, named_variables.keys())
def testInitNotCalled(self):
class NoInit(checkpointable.Checkpointable):
def __init__(self):
pass
# __init__ for Checkpointable will be called implicitly.
checkpointable_utils.add_variable(NoInit(), "var", shape=[])
def testShapeDtype(self):
root = checkpointable.Checkpointable()
v1 = checkpointable_utils.add_variable(
root, name="v1", initializer=3., dtype=dtypes.float64)
self.assertEqual(dtypes.float64, v1.dtype)
v2 = checkpointable_utils.add_variable(
root,
name="v2",
shape=[3],
initializer=init_ops.ones_initializer,
dtype=dtypes.float64)
self.assertEqual(dtypes.float64, v2.dtype)
self.assertAllEqual([1., 1., 1.], self.evaluate(v2))
class _MirroringSaveable(core_saver.BaseSaverBuilder.SaveableObject):
def __init__(self, primary_variable, mirrored_variable, name):
self._primary_variable = primary_variable
self._mirrored_variable = mirrored_variable
tensor = self._primary_variable.read_value()
spec = core_saver.BaseSaverBuilder.SaveSpec(
tensor=tensor,
slice_spec="",
name=name)
super(_MirroringSaveable, self).__init__(
tensor, [spec], name)
def restore(self, restored_tensors, restored_shapes):
"""Restore the same value into both variables."""
tensor, = restored_tensors
return control_flow_ops.group(
self._primary_variable.assign(tensor),
self._mirrored_variable.assign(tensor))
class _OwnsMirroredVariables(checkpointable.CheckpointableBase):
"""A Checkpointable object which returns a more complex SaveableObject."""
def __init__(self):
self.non_dep_variable = variable_scope.get_variable(
name="non_dep_variable", initializer=6., use_resource=True)
self.mirrored = variable_scope.get_variable(
name="mirrored", initializer=15., use_resource=True)
def _gather_saveables_for_checkpoint(self):
def _saveable_factory(name=self.non_dep_variable.name):
return _MirroringSaveable(
primary_variable=self.non_dep_variable,
mirrored_variable=self.mirrored,
name=name)
return {checkpointable.VARIABLE_VALUE_KEY: _saveable_factory}
# The Saver sorts by name before parsing, so we need a name property.
@property
def name(self):
return self.non_dep_variable.name
class CheckpointingTests(test.TestCase):
@test_util.run_in_graph_and_eager_modes(assert_no_eager_garbage=True)
def testNamingWithOptimizer(self):
input_value = constant_op.constant([[3.]])
model = MyModel()
# A nuisance Model using the same optimizer. Its slot variables should not
# go in the checkpoint, since it is never depended on.
other_model = MyModel()
optimizer = adam.AdamOptimizer(0.001)
optimizer_step = training_util.get_or_create_global_step()
root_checkpointable = checkpointable_utils.Checkpoint(
optimizer=optimizer, model=model, optimizer_step=optimizer_step)
if context.executing_eagerly():
optimizer.minimize(
lambda: model(input_value),
global_step=optimizer_step)
optimizer.minimize(
lambda: other_model(input_value),
global_step=optimizer_step)
else:
train_op = optimizer.minimize(
model(input_value), global_step=optimizer_step)
optimizer.minimize(
other_model(input_value),
global_step=optimizer_step)
self.evaluate(checkpointable_utils.gather_initializers(
root_checkpointable))
self.evaluate(train_op)
named_variables, serialized_graph = (
checkpointable_utils._serialize_object_graph(root_checkpointable))
expected_checkpoint_names = (
# Created in the root node, so no prefix.
"optimizer_step",
"model/_second/kernel",
"model/_named_dense/kernel",
"model/_named_dense/bias",
# non-Layer dependency of the model
"model/_non_layer/a_variable",
# The optimizer creates two non-slot variables
"optimizer/beta1_power",
"optimizer/beta2_power",
# Slot variables
"model/_second/kernel/.OPTIMIZER_SLOT/optimizer/m",
"model/_second/kernel/.OPTIMIZER_SLOT/optimizer/v",
"model/_named_dense/kernel/.OPTIMIZER_SLOT/optimizer/m",
"model/_named_dense/kernel/.OPTIMIZER_SLOT/optimizer/v",
"model/_named_dense/bias/.OPTIMIZER_SLOT/optimizer/m",
"model/_named_dense/bias/.OPTIMIZER_SLOT/optimizer/v",
)
suffix = "/.ATTRIBUTES/VARIABLE_VALUE"
expected_checkpoint_names = [
name + suffix for name in expected_checkpoint_names]
six.assertCountEqual(self, expected_checkpoint_names,
named_variables.keys())
# Check that we've mapped to the right variable objects (not exhaustive)
self.assertEqual(
"global_step:0",
named_variables["optimizer_step" + suffix].name)
self.assertEqual(
"my_model/dense_1/kernel:0",
named_variables["model/_second/kernel" + suffix].name)
self.assertEqual(
"my_model/dense/kernel:0",
named_variables["model/_named_dense/kernel" + suffix].name)
self.assertEqual(
"beta1_power:0",
named_variables["optimizer/beta1_power" + suffix].name)
self.assertEqual(
"beta2_power:0",
named_variables["optimizer/beta2_power" + suffix].name)
# Spot check the generated protocol buffers.
self.assertEqual("optimizer",
serialized_graph.nodes[0].children[1].local_name)
optimizer_node = serialized_graph.nodes[serialized_graph.nodes[0].children[
1].node_id]
self.assertEqual("beta1_power",
optimizer_node.children[0].local_name)
self.assertEqual("beta1_power",
serialized_graph.nodes[optimizer_node.children[0].node_id]
.attributes[0].full_name)
self.assertEqual(
"my_model/dense/kernel",
serialized_graph.nodes[optimizer_node.slot_variables[0]
.original_variable_node_id]
.attributes[0].full_name)
# We strip off the :0 suffix, as variable.name-based saving does.
self.assertEqual(
"my_model/dense/kernel/Adam",
serialized_graph.nodes[optimizer_node.slot_variables[0]
.slot_variable_node_id]
.attributes[0].full_name)
self.assertEqual(
"my_model/dense/kernel/Adam:0",
optimizer.get_slot(
var=named_variables["model/_named_dense/kernel" + suffix],
name="m").name)
self.assertEqual(
"model/_named_dense/kernel" + suffix,
serialized_graph.nodes[
optimizer_node.slot_variables[0]
.original_variable_node_id].attributes[0].checkpoint_key)
self.assertEqual("m", optimizer_node.slot_variables[0].slot_name)
self.assertEqual(
"model/_named_dense/kernel/.OPTIMIZER_SLOT/optimizer/m" + suffix,
serialized_graph.nodes[
optimizer_node.slot_variables[0]
.slot_variable_node_id].attributes[0].checkpoint_key)
@test_util.run_in_graph_and_eager_modes(assert_no_eager_garbage=True)
def testMoreComplexSaveableReturned(self):
v = _OwnsMirroredVariables()
checkpoint = checkpointable_utils.Checkpoint(v=v)
test_dir = self.get_temp_dir()
prefix = os.path.join(test_dir, "ckpt")
self.evaluate(v.non_dep_variable.assign(42.))
save_path = checkpoint.save(prefix)
self.evaluate(v.non_dep_variable.assign(43.))
self.evaluate(v.mirrored.assign(44.))
checkpoint.restore(save_path).assert_consumed().initialize_or_restore()
self.assertEqual(42., self.evaluate(v.non_dep_variable))
self.assertEqual(42., self.evaluate(v.mirrored))
self.evaluate(v.non_dep_variable.assign(44.))
save_path = checkpoint.save(prefix)
self.evaluate(v.non_dep_variable.assign(45.))
checkpoint.restore(save_path).assert_consumed().initialize_or_restore()
self.assertEqual(44., self.evaluate(v.non_dep_variable))
self.assertEqual(44., self.evaluate(v.mirrored))
@test_util.run_in_graph_and_eager_modes()
def testMoreComplexSaveableReturnedWithGlobalName(self):
# The same object can also be saved using the name-based saver.
v = _OwnsMirroredVariables()
saver = core_saver.Saver(var_list=[v])
test_dir = self.get_temp_dir()
prefix = os.path.join(test_dir, "ckpt")
self.evaluate(v.non_dep_variable.assign(42.))
with self.test_session() as sess:
save_path = saver.save(sess, prefix)
self.evaluate(v.non_dep_variable.assign(43.))
self.evaluate(v.mirrored.assign(44.))
saver.restore(sess, save_path)
self.assertEqual(42., self.evaluate(v.non_dep_variable))
self.assertEqual(42., self.evaluate(v.mirrored))
@test_util.run_in_graph_and_eager_modes()
def testSaveRestore(self):
model = MyModel()
optimizer = adam.AdamOptimizer(0.001)
root_checkpointable = checkpointable_utils.Checkpoint(
optimizer=optimizer, model=model)
input_value = constant_op.constant([[3.]])
if context.executing_eagerly():
optimizer.minimize(
lambda: model(input_value))
else:
train_op = optimizer.minimize(model(input_value))
# TODO(allenl): Make initialization more pleasant when graph building.
root_checkpointable.save_counter # pylint: disable=pointless-statement
self.evaluate(checkpointable_utils.gather_initializers(
root_checkpointable))
self.evaluate(train_op)
prefix = os.path.join(self.get_temp_dir(), "ckpt")
self.evaluate(state_ops.assign(model._named_dense.variables[1], [42.]))
m_bias_slot = optimizer.get_slot(model._named_dense.variables[1], "m")
self.evaluate(state_ops.assign(m_bias_slot, [1.5]))
save_path = root_checkpointable.save(file_prefix=prefix)
self.evaluate(state_ops.assign(model._named_dense.variables[1], [43.]))
self.evaluate(state_ops.assign(root_checkpointable.save_counter, 3))
optimizer_variables = self.evaluate(optimizer.variables())
self.evaluate(state_ops.assign(m_bias_slot, [-2.]))
# Immediate restoration
status = root_checkpointable.restore(save_path=save_path).assert_consumed()
status.run_restore_ops()
self.assertAllEqual([42.], self.evaluate(model._named_dense.variables[1]))
self.assertAllEqual(1, self.evaluate(root_checkpointable.save_counter))
self.assertAllEqual([1.5], self.evaluate(m_bias_slot))
if not context.executing_eagerly():
return # Restore-on-create is only supported when executing eagerly
on_create_model = MyModel()
on_create_optimizer = adam.AdamOptimizer(
0.001,
        # Preserve beta1_power and beta2_power when applying gradients so we can
# test that they've been restored correctly.
beta1=1.0, beta2=1.0)
on_create_root = checkpointable_utils.Checkpoint(
optimizer=on_create_optimizer, model=on_create_model)
# Deferred restoration
status = on_create_root.restore(save_path=save_path)
on_create_model(constant_op.constant([[3.]])) # create variables
self.assertAllEqual(1, self.evaluate(on_create_root.save_counter))
self.assertAllEqual([42.],
self.evaluate(
on_create_model._named_dense.variables[1]))
on_create_m_bias_slot = on_create_optimizer.get_slot(
on_create_model._named_dense.variables[1], "m")
# Optimizer slot variables are created when the original variable is
# restored.
self.assertAllEqual([1.5], self.evaluate(on_create_m_bias_slot))
self.assertAllEqual(optimizer_variables[2:],
self.evaluate(on_create_optimizer.variables()))
dummy_var = resource_variable_ops.ResourceVariable([1.])
on_create_optimizer.minimize(loss=dummy_var.read_value)
status.assert_consumed()
beta1_power, beta2_power = on_create_optimizer._get_beta_accumulators()
self.assertAllEqual(optimizer_variables[0], self.evaluate(beta1_power))
self.assertAllEqual(optimizer_variables[1], self.evaluate(beta2_power))
# TODO(allenl): Debug garbage created by this test in python3.
def testDeferredRestorationUsageEager(self):
"""An idiomatic eager execution example."""
num_training_steps = 10
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
for training_continuation in range(3):
model = MyModel()
optimizer = adam.AdamOptimizer(0.001)
root = checkpointable_utils.Checkpoint(
optimizer=optimizer, model=model,
optimizer_step=training_util.get_or_create_global_step())
root.restore(core_saver.latest_checkpoint(checkpoint_directory))
for _ in range(num_training_steps):
# TODO(allenl): Use a Dataset and serialize/checkpoint it.
input_value = constant_op.constant([[3.]])
optimizer.minimize(
lambda: model(input_value), # pylint: disable=cell-var-from-loop
global_step=root.optimizer_step)
root.save(file_prefix=checkpoint_prefix)
self.assertEqual((training_continuation + 1) * num_training_steps,
root.optimizer_step.numpy())
def testUsageGraph(self):
"""Expected usage when graph building."""
with context.graph_mode():
num_training_steps = 10
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
for training_continuation in range(3):
with ops.Graph().as_default():
model = MyModel()
optimizer = adam.AdamOptimizer(0.001)
root = checkpointable_utils.Checkpoint(
optimizer=optimizer, model=model,
global_step=training_util.get_or_create_global_step())
input_value = constant_op.constant([[3.]])
train_op = optimizer.minimize(
model(input_value),
global_step=root.global_step)
checkpoint_path = core_saver.latest_checkpoint(checkpoint_directory)
with self.test_session(graph=ops.get_default_graph()) as session:
status = root.restore(save_path=checkpoint_path)
status.initialize_or_restore(session=session)
if checkpoint_path is None:
self.assertEqual(0, training_continuation)
with self.assertRaises(AssertionError):
status.assert_consumed()
else:
status.assert_consumed()
for _ in range(num_training_steps):
session.run(train_op)
root.save(file_prefix=checkpoint_prefix, session=session)
self.assertEqual((training_continuation + 1) * num_training_steps,
session.run(root.global_step))
self.assertEqual(training_continuation + 1,
session.run(root.save_counter))
@test_util.run_in_graph_and_eager_modes()
def testAgnosticUsage(self):
"""Graph/eager agnostic usage."""
# Does create garbage when executing eagerly due to ops.Graph() creation.
num_training_steps = 10
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
for training_continuation in range(3):
with ops.Graph().as_default(), self.test_session(
graph=ops.get_default_graph()), test_util.device(use_gpu=True):
model = MyModel()
optimizer = adam.AdamOptimizer(0.001)
root = checkpointable_utils.Checkpoint(
optimizer=optimizer, model=model,
global_step=training_util.get_or_create_global_step())
checkpoint_path = core_saver.latest_checkpoint(checkpoint_directory)
status = root.restore(save_path=checkpoint_path)
input_value = constant_op.constant([[3.]])
train_fn = functools.partial(
optimizer.minimize,
functools.partial(model, input_value),
global_step=root.global_step)
if not context.executing_eagerly():
train_fn = functools.partial(self.evaluate, train_fn())
status.initialize_or_restore()
for _ in range(num_training_steps):
train_fn()
root.save(file_prefix=checkpoint_prefix)
self.assertEqual((training_continuation + 1) * num_training_steps,
self.evaluate(root.global_step))
self.assertEqual(training_continuation + 1,
self.evaluate(root.save_counter))
def _get_checkpoint_name(self, name):
root = checkpointable.Checkpointable()
checkpointable_utils.add_variable(
root, name=name, shape=[1, 2], dtype=dtypes.float64)
named_variables, _ = checkpointable_utils._serialize_object_graph(root)
checkpoint_name, = named_variables.keys()
with ops.name_scope("root/" + checkpoint_name):
pass # Make sure we can use this as an op name if we prefix it.
return checkpoint_name
@test_util.run_in_graph_and_eager_modes(assert_no_eager_garbage=True)
def testVariableNameEscaping(self):
suffix = "/.ATTRIBUTES/VARIABLE_VALUE"
self.assertEqual(r"a.Sb.Sc" + suffix, self._get_checkpoint_name(r"a/b/c"))
self.assertEqual(r"b" + suffix, self._get_checkpoint_name(r"b"))
self.assertEqual(r"c.S" + suffix, self._get_checkpoint_name(r"c/"))
self.assertEqual(r"d.S..S" + suffix, self._get_checkpoint_name(r"d/.S"))
self.assertEqual(r"d.S..ATTRIBUTES.Sf" + suffix,
self._get_checkpoint_name(r"d/.ATTRIBUTES/f"))
@test_util.run_in_graph_and_eager_modes(assert_no_eager_garbage=True)
def testNumberedPath(self):
root = checkpointable.Checkpointable()
leaf = checkpointable.Checkpointable()
root.leaf = leaf
checkpointable_utils.add_variable(leaf, name="v", shape=[])
named_variables, _ = checkpointable_utils._serialize_object_graph(root)
variable_name, = named_variables.keys()
self.assertEqual(r"leaf/v/.ATTRIBUTES/VARIABLE_VALUE", variable_name)
@test_util.run_in_graph_and_eager_modes()
def testLocalNameValidation(self):
root = checkpointable.Checkpointable()
leaf = checkpointable.Checkpointable()
# Dots are escaped, which avoids conflicts with reserved names.
root._track_checkpointable(leaf, name=".ATTRIBUTES")
checkpointable_utils.add_variable(checkpointable=leaf, name="a", shape=[])
named_variables, _ = checkpointable_utils._serialize_object_graph(root)
name, = named_variables.keys()
self.assertEqual(name, "..ATTRIBUTES/a/.ATTRIBUTES/VARIABLE_VALUE")
def testAnonymousVarsInInit(self):
class Model(training.Model):
def __init__(self):
super(Model, self).__init__()
self.w = resource_variable_ops.ResourceVariable(0.0)
self.b = resource_variable_ops.ResourceVariable(0.0)
self.vars = [self.w, self.b]
def call(self, x):
return x * self.w + self.b
with context.eager_mode():
model = Model()
optimizer = adam.AdamOptimizer(learning_rate=0.05)
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
checkpoint = checkpointable_utils.Checkpoint(
model=model, optimizer=optimizer)
for _ in range(2):
checkpoint.save(checkpoint_prefix)
with backprop.GradientTape() as tape:
loss = (constant_op.constant(1.)
- model(constant_op.constant(1.))) ** 2
grad = tape.gradient(loss, model.vars)
optimizer.apply_gradients(
[(g, v) for g, v in zip(grad, model.vars)])
@test_util.run_in_graph_and_eager_modes()
def testLateDependencyTracking(self):
class Dependency(checkpointable.Checkpointable):
def build(self):
self.var = checkpointable_utils.add_variable(
self, "var", initializer=0.)
class LateDependencies(checkpointable.Checkpointable):
def add_dep(self):
self.dep = Dependency()
self.dep.build()
original = LateDependencies()
original.add_dep()
self.evaluate(state_ops.assign(original.dep.var, 123.))
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
save_path = checkpointable_utils.CheckpointableSaver(
original).save(checkpoint_prefix)
load_into = LateDependencies()
status = checkpointable_utils.CheckpointableSaver(
load_into).restore(save_path)
with self.assertRaises(AssertionError):
status.assert_consumed()
load_into.add_dep()
status.assert_consumed()
status.run_restore_ops()
self.assertEqual(123., self.evaluate(load_into.dep.var))
@test_util.run_in_graph_and_eager_modes()
def testDepAfterVar(self):
class Dependency(checkpointable.Checkpointable):
def build(self):
self.var = checkpointable_utils.add_variable(
self, "var", initializer=0.)
class DepAfterVar(checkpointable.Checkpointable):
def add_dep(self):
dep = Dependency()
dep.build()
self.dep = dep
dep_after_var = DepAfterVar()
dep_after_var.add_dep()
self.evaluate(state_ops.assign(dep_after_var.dep.var, -14.))
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
save_path = checkpointable_utils.CheckpointableSaver(dep_after_var).save(
checkpoint_prefix)
loaded_dep_after_var = DepAfterVar()
status = checkpointable_utils.CheckpointableSaver(
loaded_dep_after_var).restore(save_path)
loaded_dep_after_var.add_dep()
status.assert_consumed()
status.run_restore_ops()
self.assertEqual(-14., self.evaluate(loaded_dep_after_var.dep.var))
@test_util.run_in_graph_and_eager_modes(assert_no_eager_garbage=True)
def testDeferredSlotRestoration(self):
checkpoint_directory = self.get_temp_dir()
root = checkpointable.Checkpointable()
root.var = checkpointable_utils.add_variable(
root, name="var", initializer=0.)
optimizer = adam.AdamOptimizer(0.1)
if context.executing_eagerly():
optimizer.minimize(root.var.read_value)
else:
train_op = optimizer.minimize(root.var)
# Note that `optimizer` has not been added as a dependency of
# `root`. Create a one-off grouping so that slot variables for `root.var`
# get initialized too.
self.evaluate(checkpointable_utils.gather_initializers(
checkpointable_utils.Checkpoint(root=root, optimizer=optimizer)))
self.evaluate(train_op)
self.evaluate(state_ops.assign(root.var, 12.))
no_slots_path = checkpointable_utils.CheckpointableSaver(root).save(
os.path.join(checkpoint_directory, "no_slots"))
root.optimizer = optimizer
self.evaluate(state_ops.assign(root.var, 13.))
self.evaluate(state_ops.assign(optimizer.get_slot(name="m", var=root.var),
14.))
slots_path = checkpointable_utils.CheckpointableSaver(root).save(
os.path.join(checkpoint_directory, "with_slots"))
new_root = checkpointable.Checkpointable()
# Load the slot-containing checkpoint (deferred), then immediately overwrite
# the non-slot variable (also deferred).
slot_status = checkpointable_utils.CheckpointableSaver(
new_root).restore(slots_path)
no_slot_status = checkpointable_utils.CheckpointableSaver(
new_root).restore(no_slots_path)
with self.assertRaises(AssertionError):
no_slot_status.assert_consumed()
new_root.var = checkpointable_utils.add_variable(
new_root, name="var", shape=[])
no_slot_status.assert_consumed()
no_slot_status.run_restore_ops()
self.assertEqual(12., self.evaluate(new_root.var))
new_root.optimizer = adam.AdamOptimizer(0.1)
with self.assertRaisesRegexp(AssertionError, "beta1_power"):
slot_status.assert_consumed()
self.assertEqual(12., self.evaluate(new_root.var))
if context.executing_eagerly():
# Slot variables are only created with restoring initializers when
# executing eagerly.
self.assertEqual(14., self.evaluate(
new_root.optimizer.get_slot(name="m", var=new_root.var)))
else:
self.assertIs(new_root.optimizer.get_slot(name="m", var=new_root.var),
None)
if context.executing_eagerly():
new_root.optimizer.minimize(new_root.var.read_value)
else:
train_op = new_root.optimizer.minimize(new_root.var)
# The slot variable now exists; restore() didn't create it, but we should
# now have a restore op for it.
slot_status.run_restore_ops()
self.assertEqual(14., self.evaluate(
new_root.optimizer.get_slot(name="m", var=new_root.var)))
self.evaluate(train_op)
slot_status.assert_consumed()
@test_util.run_in_graph_and_eager_modes(assert_no_eager_garbage=True)
def testOverlappingRestores(self):
checkpoint_directory = self.get_temp_dir()
save_root = checkpointable.Checkpointable()
save_root.dep = checkpointable.Checkpointable()
save_root.dep.var = checkpointable_utils.add_variable(
save_root.dep, name="var", initializer=0.)
self.evaluate(state_ops.assign(save_root.dep.var, 12.))
saver = checkpointable_utils.CheckpointableSaver(save_root)
first_path = saver.save(os.path.join(checkpoint_directory, "first"))
self.evaluate(state_ops.assign(save_root.dep.var, 13.))
second_path = saver.save(os.path.join(checkpoint_directory, "second"))
first_root = checkpointable.Checkpointable()
second_root = checkpointable.Checkpointable()
first_status = checkpointable_utils.CheckpointableSaver(
first_root).restore(first_path)
second_status = checkpointable_utils.CheckpointableSaver(
second_root).restore(second_path)
load_dep = checkpointable.Checkpointable()
load_dep.var = checkpointable_utils.add_variable(
load_dep, name="var", shape=[])
first_root.dep = load_dep
first_status.assert_consumed()
first_status.run_restore_ops()
self.assertEqual(12., self.evaluate(load_dep.var))
second_root.dep = load_dep
second_status.assert_consumed()
second_status.run_restore_ops()
self.assertEqual(13., self.evaluate(load_dep.var))
# Try again with the order of the restore() reversed. The last restore
# determines the final value.
first_root = checkpointable.Checkpointable()
second_root = checkpointable.Checkpointable()
second_status = checkpointable_utils.CheckpointableSaver(
second_root).restore(second_path)
first_status = checkpointable_utils.CheckpointableSaver(
first_root).restore(first_path)
load_dep = checkpointable.Checkpointable()
load_dep.var = checkpointable_utils.add_variable(
load_dep, name="var", shape=[])
first_root.dep = load_dep
first_status.assert_consumed()
first_status.run_restore_ops()
self.assertEqual(12., self.evaluate(load_dep.var))
second_root.dep = load_dep
second_status.assert_consumed()
second_status.run_restore_ops()
self.assertEqual(12., self.evaluate(load_dep.var))
@test_util.run_in_graph_and_eager_modes(assert_no_eager_garbage=True)
def testAmbiguousLoad(self):
# Not OK to split one checkpoint object into two
checkpoint_directory = self.get_temp_dir()
save_root = checkpointable.Checkpointable()
save_root.dep_one = checkpointable.Checkpointable()
save_root.dep_two = checkpointable.Checkpointable()
dep_three = checkpointable.Checkpointable()
save_root.dep_one.dep_three = dep_three
save_root.dep_two.dep_three = dep_three
checkpointable_utils.add_variable(dep_three, name="var", initializer=0.)
self.evaluate(checkpointable_utils.gather_initializers(save_root))
save_path = checkpointable_utils.CheckpointableSaver(save_root).save(
os.path.join(checkpoint_directory, "ckpt"))
load_root = checkpointable.Checkpointable()
checkpointable_utils.CheckpointableSaver(load_root).restore(save_path)
load_root.dep_one = checkpointable.Checkpointable()
load_root.dep_two = checkpointable.Checkpointable()
load_root.dep_one.dep_three = checkpointable.Checkpointable()
with self.assertRaisesRegexp(AssertionError,
"resolved to different objects"):
load_root.dep_two.dep_three = checkpointable.Checkpointable()
@test_util.run_in_graph_and_eager_modes(assert_no_eager_garbage=True)
def testObjectsCombined(self):
# Currently fine to load two checkpoint objects into one Python object
checkpoint_directory = self.get_temp_dir()
save_root = checkpointable.Checkpointable()
save_root.dep_one = checkpointable.Checkpointable()
save_root.dep_two = checkpointable.Checkpointable()
checkpointable_utils.add_variable(
save_root.dep_one, name="var1", initializer=32., dtype=dtypes.float64)
checkpointable_utils.add_variable(
save_root.dep_two, name="var2", initializer=64., dtype=dtypes.float64)
self.evaluate(checkpointable_utils.gather_initializers(save_root))
save_path = checkpointable_utils.CheckpointableSaver(save_root).save(
os.path.join(checkpoint_directory, "ckpt"))
load_root = checkpointable.Checkpointable()
load_root.dep_one = checkpointable.Checkpointable()
load_root.dep_two = load_root.dep_one
v1 = checkpointable_utils.add_variable(
load_root.dep_one, name="var1", shape=[], dtype=dtypes.float64)
v2 = checkpointable_utils.add_variable(
load_root.dep_one, name="var2", shape=[], dtype=dtypes.float64)
status = checkpointable_utils.CheckpointableSaver(load_root).restore(
save_path).assert_consumed()
status.run_restore_ops()
self.assertEqual(32., self.evaluate(v1))
self.assertEqual(64., self.evaluate(v2))
@test_util.run_in_graph_and_eager_modes()
def testDependencyLoop(self):
# Note: this test creates garbage during eager execution because it
# purposefully creates a reference cycle.
first = checkpointable.Checkpointable()
second = checkpointable.Checkpointable()
first.second = second
second.first = first
first.v = checkpointable_utils.add_variable(
first, "v1", initializer=[3., 1., 4.])
second.v = checkpointable_utils.add_variable(
second, "v2", initializer=[1., 1., 2., 3.])
self.evaluate(checkpointable_utils.gather_initializers(first))
checkpoint_directory = self.get_temp_dir()
save_path = checkpointable_utils.CheckpointableSaver(first).save(
os.path.join(checkpoint_directory, "ckpt"))
# Test deferred loading
first_load = checkpointable.Checkpointable()
status = checkpointable_utils.CheckpointableSaver(
first_load).restore(save_path)
second_load = checkpointable.Checkpointable()
first_load.second = second_load
second_load.first = first_load
with self.assertRaises(AssertionError):
status.assert_consumed()
first_load.v = checkpointable_utils.add_variable(
first_load, "v1", shape=[3])
second_load.v = checkpointable_utils.add_variable(
second_load, "v2", shape=[4])
status.assert_consumed()
status.run_restore_ops()
self.assertAllEqual([3., 1., 4.], self.evaluate(first_load.v))
self.assertAllEqual([1., 1., 2., 3.], self.evaluate(second_load.v))
# Test loading when variables have already been created
self.evaluate(first_load.v.assign([2., 7., 1.]))
self.assertAllEqual([2., 7., 1.], self.evaluate(first_load.v))
self.evaluate(second_load.v.assign([2., 7., 1., 8.]))
self.assertAllEqual([2., 7., 1., 8.], self.evaluate(second_load.v))
status = checkpointable_utils.CheckpointableSaver(first_load).restore(
save_path).assert_consumed()
status.run_restore_ops()
self.assertAllEqual([3., 1., 4.], self.evaluate(first_load.v))
self.assertAllEqual([1., 1., 2., 3.], self.evaluate(second_load.v))
@test_util.run_in_graph_and_eager_modes()
def testRestoreOnAssign(self):
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
save_graph = ops.Graph()
with save_graph.as_default(), self.test_session(save_graph):
first = checkpointable.Checkpointable()
first.var1 = variable_scope.get_variable(
name="outside_var", initializer=0.)
first.var2 = variable_scope.get_variable(
name="blah", initializer=0.)
self.evaluate(first.var1.assign(4.))
self.evaluate(first.var2.assign(8.))
save_path = checkpointable_utils.CheckpointableSaver(first).save(
checkpoint_prefix)
restore_graph = ops.Graph()
with restore_graph.as_default(), self.test_session(restore_graph):
second = checkpointable.Checkpointable()
second.var2 = variable_scope.get_variable(
name="blah", initializer=0.)
status = checkpointable_utils.CheckpointableSaver(
second).restore(save_path)
recreated_var1 = variable_scope.get_variable(
name="outside_var", initializer=0.)
status.run_restore_ops()
self.assertEqual(8., self.evaluate(second.var2))
self.evaluate(recreated_var1.assign(-2.))
self.assertEqual(-2., self.evaluate(recreated_var1))
second.var1 = recreated_var1
status.run_restore_ops()
self.assertEqual(4., self.evaluate(recreated_var1))
def testManySavesGraph(self):
"""Saves after the first should not modify the graph."""
with context.graph_mode():
graph = ops.Graph()
with graph.as_default(), self.test_session(graph):
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
obj = checkpointable.Checkpointable()
obj.var = variable_scope.get_variable(name="v", initializer=0.)
obj.opt = adam.AdamOptimizer(0.1)
obj.opt.minimize(obj.var.read_value())
self.evaluate(checkpointable_utils.gather_initializers(obj))
saver = checkpointable_utils.CheckpointableSaver(obj)
saver.save(checkpoint_prefix)
before_ops = graph.get_operations()
saver.save(checkpoint_prefix)
self.assertEqual(before_ops, graph.get_operations())
@test_util.run_in_graph_and_eager_modes(assert_no_eager_garbage=True)
def testCheckpointCleanup(self):
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
obj = checkpointable.Checkpointable()
obj.var = variable_scope.get_variable(name="v", initializer=0.)
self.evaluate(checkpointable_utils.gather_initializers(obj))
saver = checkpointable_utils.Checkpoint(obj=obj)
for _ in range(10):
saver.save(checkpoint_prefix)
expected_filenames = ["checkpoint"]
for checkpoint_number in range(6, 11):
expected_filenames.append("ckpt-%d.index" % (checkpoint_number,))
expected_filenames.append(
"ckpt-%d.data-00000-of-00001" % (checkpoint_number,))
six.assertCountEqual(
self,
expected_filenames,
os.listdir(checkpoint_directory))
@test_util.run_in_graph_and_eager_modes(assert_no_eager_garbage=True)
def testCheckpointCleanupChangingVarList(self):
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
obj = checkpointable.Checkpointable()
obj.var = variable_scope.get_variable(name="v", initializer=0.)
self.evaluate(checkpointable_utils.gather_initializers(obj))
checkpoint = checkpointable_utils.Checkpoint(obj=obj)
looped_variables = []
for iteration in range(10):
new_variable = resource_variable_ops.ResourceVariable(iteration)
self.evaluate(new_variable.initializer)
setattr(checkpoint, "var_%d" % iteration, new_variable)
checkpoint.save(checkpoint_prefix)
looped_variables.append(new_variable)
expected_filenames = ["checkpoint"]
# We've copied the saver each time, but checkpoint management should still
# be consistent.
for checkpoint_number in range(6, 11):
expected_filenames.append("ckpt-%d.index" % (checkpoint_number,))
expected_filenames.append(
"ckpt-%d.data-00000-of-00001" % (checkpoint_number,))
six.assertCountEqual(
self,
expected_filenames,
os.listdir(checkpoint_directory))
for v in looped_variables:
self.evaluate(v.assign(314))
checkpoint.restore(checkpoint_prefix + "-6").run_restore_ops()
self.assertEqual(314, self.evaluate(checkpoint.var_9))
self.assertEqual(314, self.evaluate(checkpoint.var_8))
self.assertEqual(314, self.evaluate(checkpoint.var_6))
self.assertEqual(5, self.evaluate(checkpoint.var_5))
self.assertEqual(1, self.evaluate(checkpoint.var_1))
self.assertEqual(0, self.evaluate(checkpoint.var_0))
if context.executing_eagerly():
checkpoint.restore(checkpoint_prefix + "-10").run_restore_ops()
self.assertEqual(9, self.evaluate(checkpoint.var_9))
self.assertEqual(8, self.evaluate(checkpoint.var_8))
self.assertEqual(1, self.evaluate(checkpoint.var_1))
self.assertEqual(0, self.evaluate(checkpoint.var_0))
else:
# Restoring into modified graphs is an error while graph building.
with self.assertRaises(NotImplementedError):
checkpoint.restore(checkpoint_prefix + "-10").run_restore_ops()
def testManyRestoresGraph(self):
"""Restores after the first should not modify the graph."""
with context.graph_mode():
graph = ops.Graph()
with graph.as_default(), self.test_session(graph):
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
obj = checkpointable.Checkpointable()
obj.var = variable_scope.get_variable(name="v", initializer=0.)
obj.opt = adam.AdamOptimizer(0.1)
obj.opt.minimize(obj.var.read_value())
self.evaluate(checkpointable_utils.gather_initializers(obj))
saver = checkpointable_utils.CheckpointableSaver(obj)
save_path = saver.save(checkpoint_prefix)
saver.restore(save_path)
before_ops = graph.get_operations()
saver.restore(save_path)
self.assertEqual(before_ops, graph.get_operations())
def testMultipleGraphsNonSlotVariables(self):
with context.graph_mode():
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
optimizer = adam.AdamOptimizer(0.001)
# Construct a model in one graph
first_graph = ops.Graph()
first_session = session_lib.Session(graph=first_graph)
with first_graph.as_default(), first_session.as_default():
first_variable = resource_variable_ops.ResourceVariable([1.])
first_root_checkpointable = checkpointable_utils.Checkpoint(
optimizer=optimizer, variable=first_variable)
train_op = optimizer.minimize(first_variable.read_value)
self.evaluate(checkpointable_utils.gather_initializers(
first_root_checkpointable))
self.evaluate(train_op)
self.evaluate(first_variable.assign([1.]))
self.evaluate(optimizer.get_slot(
var=first_variable, name="m").assign([2.]))
beta1_power, _ = optimizer._get_beta_accumulators()
self.evaluate(beta1_power.assign(3.))
# Save and load in a second graph
second_graph = ops.Graph()
with second_graph.as_default(), session_lib.Session(graph=second_graph):
second_variable = resource_variable_ops.ResourceVariable([1.])
second_root_checkpointable = checkpointable_utils.Checkpoint(
optimizer=optimizer, variable=second_variable)
train_op = optimizer.minimize(second_variable.read_value)
second_root_checkpointable.restore(None).initialize_or_restore()
self.evaluate(train_op)
self.evaluate(second_variable.assign([4.]))
self.evaluate(optimizer.get_slot(
var=second_variable, name="m").assign([5.]))
beta1_power, _ = optimizer._get_beta_accumulators()
self.evaluate(beta1_power.assign(6.))
save_path = second_root_checkpointable.save(checkpoint_prefix)
self.evaluate(second_variable.assign([7.]))
self.evaluate(optimizer.get_slot(
var=second_variable, name="m").assign([8.]))
beta1_power, _ = optimizer._get_beta_accumulators()
self.assertAllEqual(6., self.evaluate(beta1_power))
status = second_root_checkpointable.restore(save_path)
status.assert_consumed().run_restore_ops()
self.assertAllEqual([4.], self.evaluate(second_variable))
self.assertAllEqual([5.], self.evaluate(optimizer.get_slot(
var=second_variable, name="m")))
beta1_power, _ = optimizer._get_beta_accumulators()
self.assertAllEqual(6., self.evaluate(beta1_power))
# Check that the first graph is unmolested
with first_graph.as_default(), first_session.as_default():
self.assertAllEqual([1.], self.evaluate(first_variable))
self.assertAllEqual([2.], self.evaluate(optimizer.get_slot(
var=first_variable, name="m")))
beta1_power, _ = optimizer._get_beta_accumulators()
self.assertAllEqual(3., self.evaluate(beta1_power))
class TemplateTests(test.TestCase):
@test_util.run_in_graph_and_eager_modes(assert_no_eager_garbage=True)
def test_checkpointable_save_restore(self):
def _templated():
v = variable_scope.get_variable(
"v", shape=[1], initializer=init_ops.zeros_initializer())
v2 = variable_scope.get_variable(
"v2", shape=[1], initializer=init_ops.zeros_initializer())
return v, v + 1., v2
save_template = template.make_template("s1", _templated)
save_root = checkpointable_utils.Checkpoint(my_template=save_template)
v1_save, _, v2_save = save_template()
self.evaluate(v1_save.assign([12.]))
self.evaluate(v2_save.assign([14.]))
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
save_path = save_root.save(checkpoint_prefix)
load_template = template.make_template("s2", _templated)
load_root = checkpointable_utils.Checkpoint(my_template=load_template)
status = load_root.restore(save_path)
var, var_plus_one, var2 = load_template()
self.assertEqual(2, len(load_template._checkpoint_dependencies))
self.assertEqual("v", load_template._checkpoint_dependencies[0].name)
self.assertEqual("v2", load_template._checkpoint_dependencies[1].name)
status.assert_consumed().run_restore_ops()
self.assertAllEqual([12.], self.evaluate(var))
self.assertAllEqual([13.], self.evaluate(var_plus_one))
self.assertAllEqual([14.], self.evaluate(var2))
@test_util.run_in_graph_and_eager_modes(assert_no_eager_garbage=True)
def test_checkpointable_save_restore_nested(self):
def _inner_template():
v = variable_scope.get_variable(
"v", shape=[1], initializer=init_ops.zeros_initializer())
return v
def _outer_template():
first_inner = template.make_template("i1", _inner_template)
second_inner = template.make_template("i2", _inner_template)
v1 = first_inner()
v2 = second_inner()
v3 = second_inner()
return (first_inner, second_inner), (v1, v2, v3)
with variable_scope.variable_scope("ignored"):
save_template = template.make_template("s1", _outer_template)
save_root = checkpointable_utils.Checkpoint(my_template=save_template)
(inner_template_one, inner_template_two), _ = save_template()
self.evaluate(inner_template_one.variables[0].assign([20.]))
self.evaluate(inner_template_two.variables[0].assign([25.]))
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
save_path = save_root.save(checkpoint_prefix)
load_template = template.make_template("s2", _outer_template)
load_root = checkpointable_utils.Checkpoint(my_template=load_template)
status = load_root.restore(save_path)
(inner_template_one, inner_template_two), (v1, v2, v3) = load_template()
outer_template_dependencies = load_root.my_template._checkpoint_dependencies
self.assertEqual(2, len(outer_template_dependencies))
self.assertEqual("i1", outer_template_dependencies[0].name)
self.assertIs(inner_template_one, outer_template_dependencies[0].ref)
self.assertEqual("i2", outer_template_dependencies[1].name)
self.assertIs(inner_template_two, outer_template_dependencies[1].ref)
self.assertEqual(1, len(inner_template_one._checkpoint_dependencies))
self.assertEqual("v", inner_template_one._checkpoint_dependencies[0].name)
self.assertEqual(1, len(inner_template_two._checkpoint_dependencies))
self.assertEqual("v", inner_template_two._checkpoint_dependencies[0].name)
status.assert_consumed().run_restore_ops()
self.assertAllEqual([20.], self.evaluate(v1))
self.assertAllEqual([25.], self.evaluate(v2))
self.assertAllEqual([25.], self.evaluate(v3))
class CheckpointCompatibilityTests(test.TestCase):
def _initialized_model(self):
input_value = constant_op.constant([[3.]])
model = MyModel()
optimizer = adam.AdamOptimizer(0.001)
optimizer_step = training_util.get_or_create_global_step()
root_checkpointable = checkpointable_utils.Checkpoint(
optimizer=optimizer, model=model, optimizer_step=optimizer_step)
train_op = optimizer.minimize(
functools.partial(model, input_value),
global_step=optimizer_step)
self.evaluate(checkpointable_utils.gather_initializers(
root_checkpointable))
self.evaluate(train_op)
# A regular variable, a slot variable, and a non-slot Optimizer variable
# with known values to check when loading.
self.evaluate(model._named_dense.bias.assign([1.]))
self.evaluate(optimizer.get_slot(
var=model._named_dense.bias, name="m").assign([2.]))
beta1_power, _ = optimizer._get_beta_accumulators()
self.evaluate(beta1_power.assign(3.))
return root_checkpointable
def _set_sentinels(self, root_checkpointable):
self.evaluate(root_checkpointable.model._named_dense.bias.assign([101.]))
self.evaluate(
root_checkpointable.optimizer.get_slot(
var=root_checkpointable.model._named_dense.bias, name="m")
.assign([102.]))
beta1_power, _ = root_checkpointable.optimizer._get_beta_accumulators()
self.evaluate(beta1_power.assign(103.))
def _check_sentinels(self, root_checkpointable):
self.assertAllEqual(
[1.], self.evaluate(root_checkpointable.model._named_dense.bias))
self.assertAllEqual([2.], self.evaluate(
root_checkpointable.optimizer.get_slot(
var=root_checkpointable.model._named_dense.bias, name="m")))
beta1_power, _ = root_checkpointable.optimizer._get_beta_accumulators()
self.assertAllEqual(3., self.evaluate(beta1_power))
def _write_name_based_checkpoint(self):
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
with context.graph_mode():
save_graph = ops.Graph()
with save_graph.as_default(), self.test_session(
graph=save_graph) as session:
root = self._initialized_model()
name_saver = core_saver.Saver()
return name_saver.save(
sess=session, save_path=checkpoint_prefix,
global_step=root.optimizer_step)
@test_util.run_in_graph_and_eager_modes()
def testLoadFromNameBasedSaver(self):
"""Save a name-based checkpoint, load it using the object-based API."""
with test_util.device(use_gpu=True):
save_path = self._write_name_based_checkpoint()
root = self._initialized_model()
self._set_sentinels(root)
with self.assertRaises(AssertionError):
self._check_sentinels(root)
object_saver = checkpointable_utils.CheckpointableSaver(root)
status = object_saver.restore(save_path)
with self.assertRaises(AssertionError):
status.assert_consumed()
status.run_restore_ops()
self._check_sentinels(root)
self._set_sentinels(root)
status.initialize_or_restore()
self._check_sentinels(root)
# TODO(allenl): Test for the core name-based saver loading object-based
# checkpoints once object-based checkpointing is in core.
def testSaveGraphLoadEager(self):
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
with context.graph_mode():
save_graph = ops.Graph()
with save_graph.as_default(), self.test_session(
graph=save_graph) as session:
root = self._initialized_model()
object_saver = checkpointable_utils.CheckpointableSaver(root)
save_path = object_saver.save(
session=session, file_prefix=checkpoint_prefix)
with context.eager_mode():
root = self._initialized_model()
self._set_sentinels(root)
root.restore(save_path).assert_consumed()
self._check_sentinels(root)
def testSaveEagerLoadGraph(self):
checkpoint_directory = self.get_temp_dir()
checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
with context.eager_mode():
root = self._initialized_model()
object_saver = checkpointable_utils.CheckpointableSaver(root)
save_path = object_saver.save(file_prefix=checkpoint_prefix)
with context.graph_mode():
save_graph = ops.Graph()
with save_graph.as_default(), self.test_session(
graph=save_graph):
root = self._initialized_model()
self._set_sentinels(root)
root.restore(save_path).assert_consumed().run_restore_ops()
self._check_sentinels(root)
if __name__ == "__main__":
test.main()
|
Do some children outgrow asthma?
Asthma symptoms that start in childhood can disappear later in life. Sometimes, however, a child's asthma goes away temporarily, only to return a few years later. But other children with asthma — particularly those with severe asthma — never outgrow it.
In young children, it can be hard to tell whether signs and symptoms such as coughing, wheezing and shortness of breath are caused by asthma or something else. Sometimes, what seems to be asthma turns out to be another condition, such as bronchitis, recurrent pneumonia or bronchiolitis. These and a number of other asthma-like conditions typically improve as children get older.
Children with more-severe asthma are less likely to outgrow it. Persistent wheezing and a history of allergies, especially to furry animals, also increase the odds that your child won't outgrow asthma.
It's important to diagnose and treat childhood asthma early on. Work with your child's doctor to manage your child's asthma. A written asthma action plan can help you track symptoms, adjust medications and help your child avoid asthma triggers. As your child gets older, involve him or her in the development of the action plan.
|
#!/usr/bin/env python
import sys
from optparse import OptionParser
from bisect import bisect_left, bisect_right
from coinunifier.wallet.factory import load_wallet
##
## Process arguments
##
USAGE = '''
% unify_coins_simple.py [OPTIONS] KIND THRESHOLD ADDRESS
KIND: kind of coin (e.g. bitcoin, litecoin, ...)
THRESHOLD: threshold amount
ADDRESS: address to send coins'''
DESCRIPTION = \
    'Make a free transaction with sub-THRESHOLD coins and the smallest' \
    ' suitable large-amount-and-high-priority coin. Then, send the minimal' \
    ' amount of coins (== DUST_SOFT_LIMIT) to the ADDRESS by using those' \
    ' inputs and deposit the change. This script is useful for unifying' \
    ' sub-threshold coins into one without a fee.'
optparser = OptionParser(USAGE, description=DESCRIPTION)
optparser.add_option('--no-dry-run',
                     action='store_false', dest='dryrun', default=True,
                     help='Broadcast a transaction to nodes')
(opts, args) = optparser.parse_args()
if len(args) != 3:
optparser.error("Incorrect number of arguments.")
kind = args[0]
theta = int(float(args[1]) * 10**8)
address = args[2]
##
## Functions
##
def coins2inputs(coins):
    # Convert wallet coin records into raw-transaction input dicts.
    return [{"txid": c['txid'], "vout": c['vout']} for c in coins]
def cumsum(ls):
    # Return the running (prefix) sums of ls; the input list is left unmodified.
    res = list(ls)  # shallow copy
    for i in range(1, len(res)):
        res[i] += res[i-1]
    return res
# Unify sub-threshold coins with a large-amount-and-high-priority coin
#
# O(n log n)
def unify_coins_simple(wallet, coins):
n = len(coins)
remain = wallet.free_tx_size-1 - wallet.base_size - 2*wallet.output_size
maxin = min(n, int(remain / wallet.input_size))
coins.sort(key=lambda x: x['amount'])
amounts = [c['amount'] for c in coins]
prios = [c['prio'] for c in coins]
camounts = cumsum(amounts)
cprios = cumsum(prios)
hiprios = list(prios)
for i in range(len(prios)-1, 0, -1):
hiprios[i-1] = max(hiprios[i-1], hiprios[i])
num = min(bisect_right(amounts, theta), maxin-1)
if num == 0:
print('No sub-threshold coins found')
return
# Determine included sub-threshold coins by binary search in (left, right]
left = 0
right = num
while left < right:
# use coins in range [0, m) and a large coin
m = int((left + right + 1) / 2)
size = wallet.base_size + (m+1)*wallet.input_size + 2*wallet.output_size
index = bisect_left(amounts, 2*wallet.dust_soft_limit - camounts[m-1],
lo=m)
if cprios[m-1]+hiprios[index] < wallet.prio_threshold*size:
# decrease size
right = m-1
else:
# increase size
left = m
num = left
if num == 0:
print('No large coin found')
return
size = wallet.base_size + (num+1)*wallet.input_size + 2*wallet.output_size
# Find a large coin
index = bisect_left(amounts, 2*wallet.dust_soft_limit - camounts[num-1],
lo=num)
while cprios[num-1]+prios[index] < wallet.prio_threshold*size:
index += 1
res = coins[0:num]
res.append(coins[index])
inputs = coins2inputs(res)
if opts.dryrun:
print('Inputs (confirmations amount)')
for c in res:
print(' %6d %.8f' % (c['confirmations'],
float(c['amount']) / 10**8))
wallet.show_send_info(inputs, address, wallet.dust_soft_limit)
print('Add --no-dry-run option to proceed')
else:
print(wallet.send(inputs, address, wallet.dust_soft_limit))
##
## Main
##
wallet = load_wallet(kind)
wallet.connect()
unify_coins_simple(wallet, wallet.unspent_coins())
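The coin-selection logic above leans on two primitives: a running cumulative sum over the sorted amounts, and `bisect` lookups against that sum to find the largest prefix of coins fitting a budget in O(log n). The following standalone sketch restates that idea; the helper names here are illustrative, not taken from the script.

```python
from bisect import bisect_right

def cumsum(ls):
    # Running totals: cumsum([1, 2, 3]) -> [1, 3, 6]
    res = list(ls)  # shallow copy, original untouched
    for i in range(1, len(res)):
        res[i] += res[i - 1]
    return res

def max_prefix_within_budget(amounts, budget):
    # Largest m such that amounts[0] + ... + amounts[m-1] <= budget.
    # amounts must be sorted ascending; the search runs on the
    # cumulative sums rather than re-summing each candidate prefix.
    return bisect_right(cumsum(amounts), budget)

amounts = sorted([5, 1, 3, 2])                      # -> [1, 2, 3, 5]
assert cumsum(amounts) == [1, 3, 6, 11]
assert max_prefix_within_budget(amounts, 7) == 3    # 1+2+3 <= 7 < 1+2+3+5
```

The same precomputed-prefix trick is what lets `unify_coins_simple` evaluate each candidate input count in constant time inside its binary search.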
|
If you’re looking for ways to spice up your food or yourself, why not go to the company that started it all? Watkins baking products are the way folks around the world get creative in the kitchen. They were America’s first apothecary manufacturer, founded in 1868 and still based in Winona, Minnesota to this day. Although they started out in personal care, they have become the premier maker of spices, powders and flavors made from all-natural ingredients. Add them to your next dish and see what folks have been missing!
Whether you’re making breakfast, dinner or dessert, trust gourmet baking ingredients that bear the J.R. Watkins name. Their flavor extracts include several varieties of vanilla as well as imitation coconut, peppermint, maple, cinnamon and rum tastes. Watkins spices are bursting with freshness, and adding garlic powder, ground ginger, paprika or parsley flakes can really give a meal some pep. Feeling overwhelmed in the kitchen? Watkins seasoning mixes make it easier to whip up flavorful tacos, chili or gravy. And you can get back to the company’s roots with body washes and other personal hygiene products. Live the natural lifestyle by outfitting your pantry or bathroom cupboard at Farm and Home Supply.
|
########
# Copyright (c) 2014 GigaSpaces Technologies Ltd. All rights reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import time
import uuid
import Queue
from cloudify import utils
from cloudify import exceptions
from cloudify.workflows import api
from cloudify.celery.app import get_celery_app
from cloudify.manager import get_node_instance
from cloudify.constants import MGMTWORKER_QUEUE
INFINITE_TOTAL_RETRIES = -1
DEFAULT_TOTAL_RETRIES = INFINITE_TOTAL_RETRIES
DEFAULT_RETRY_INTERVAL = 30
DEFAULT_SUBGRAPH_TOTAL_RETRIES = 0
DEFAULT_SEND_TASK_EVENTS = True
TASK_PENDING = 'pending'
TASK_SENDING = 'sending'
TASK_SENT = 'sent'
TASK_STARTED = 'started'
TASK_RESCHEDULED = 'rescheduled'
TASK_SUCCEEDED = 'succeeded'
TASK_FAILED = 'failed'
TERMINATED_STATES = [TASK_RESCHEDULED, TASK_SUCCEEDED, TASK_FAILED]
DISPATCH_TASK = 'cloudify.dispatch.dispatch'
INSPECT_TIMEOUT = 30
def retry_failure_handler(task):
"""Basic on_success/on_failure handler that always returns retry"""
return HandlerResult.retry()
class WorkflowTask(object):
"""A base class for workflow tasks"""
def __init__(self,
workflow_context,
task_id=None,
info=None,
on_success=None,
on_failure=None,
total_retries=DEFAULT_TOTAL_RETRIES,
retry_interval=DEFAULT_RETRY_INTERVAL,
send_task_events=DEFAULT_SEND_TASK_EVENTS):
"""
:param task_id: The id of this task (generated if none is provided)
:param info: A short description of this task (for logging)
:param on_success: A handler called when the task's execution
terminates successfully.
Expected to return one of
[HandlerResult.retry(), HandlerResult.cont()]
to indicate whether this task should be re-executed.
:param on_failure: A handler called when the task's execution
fails.
Expected to return one of
[HandlerResult.retry(), HandlerResult.ignore(),
HandlerResult.fail()]
to indicate whether this task should be re-executed,
cause the engine to terminate workflow execution
immediately or simply ignore this task failure and
move on.
:param total_retries: Maximum retry attempt for this task, in case
the handlers return a retry attempt.
:param retry_interval: Number of seconds to wait between retries
:param workflow_context: the CloudifyWorkflowContext instance
"""
self.id = task_id or str(uuid.uuid4())
self._state = TASK_PENDING
self.async_result = None
self.on_success = on_success
self.on_failure = on_failure
self.info = info
self.error = None
self.total_retries = total_retries
self.retry_interval = retry_interval
self.terminated = Queue.Queue(maxsize=1)
self.is_terminated = False
self.workflow_context = workflow_context
self.send_task_events = send_task_events
self.containing_subgraph = None
self.current_retries = 0
# timestamp before which the task should not be executed
# by the task graph; overridden by the task graph during
# retries
self.execute_after = time.time()
def dump(self):
return {
'id': self.id,
'state': self.get_state(),
'info': self.info,
'error': self.error,
'current_retries': self.current_retries,
'cloudify_context': self.cloudify_context
}
def is_remote(self):
"""
:return: Is this a remote task
"""
return not self.is_local()
def is_local(self):
"""
:return: Is this a local task
"""
raise NotImplementedError('Implemented by subclasses')
def is_nop(self):
"""
:return: Is this a NOP task
"""
return False
def get_state(self):
"""
Get the task state
:return: The task state [pending, sending, sent, started,
rescheduled, succeeded, failed]
"""
return self._state
def set_state(self, state):
"""
Set the task state
:param state: The state to set [pending, sending, sent, started,
rescheduled, succeeded, failed]
"""
if state not in [TASK_PENDING, TASK_SENDING, TASK_SENT, TASK_STARTED,
TASK_RESCHEDULED, TASK_SUCCEEDED, TASK_FAILED]:
raise RuntimeError('Illegal state set on task: {0} '
'[task={1}]'.format(state, str(self)))
self._state = state
if state in TERMINATED_STATES:
self.is_terminated = True
self.terminated.put_nowait(True)
def wait_for_terminated(self, timeout=None):
if self.is_terminated:
return
self.terminated.get(timeout=timeout)
def handle_task_terminated(self):
if self.get_state() in (TASK_FAILED, TASK_RESCHEDULED):
handler_result = self._handle_task_not_succeeded()
else:
handler_result = self._handle_task_succeeded()
if handler_result.action == HandlerResult.HANDLER_RETRY:
if any([self.total_retries == INFINITE_TOTAL_RETRIES,
self.current_retries < self.total_retries,
handler_result.ignore_total_retries]):
if handler_result.retry_after is None:
handler_result.retry_after = self.retry_interval
if handler_result.retried_task is None:
new_task = self.duplicate_for_retry(
time.time() + handler_result.retry_after)
handler_result.retried_task = new_task
else:
handler_result.action = HandlerResult.HANDLER_FAIL
if self.containing_subgraph:
subgraph = self.containing_subgraph
retried_task = None
if handler_result.action == HandlerResult.HANDLER_FAIL:
handler_result.action = HandlerResult.HANDLER_IGNORE
# It is possible that two concurrent tasks failed.
# we will only consider the first one handled
if not subgraph.failed_task:
subgraph.failed_task = self
subgraph.set_state(TASK_FAILED)
elif handler_result.action == HandlerResult.HANDLER_RETRY:
retried_task = handler_result.retried_task
subgraph.task_terminated(task=self, new_task=retried_task)
return handler_result
def _handle_task_succeeded(self):
"""Call handler for task success"""
if self.on_success:
return self.on_success(self)
else:
return HandlerResult.cont()
def _handle_task_not_succeeded(self):
"""
Call handler for task which hasn't ended in 'succeeded' state
(i.e. has either failed or been rescheduled)
"""
try:
exception = self.async_result.result
except Exception as e:
exception = exceptions.NonRecoverableError(
'Could not de-serialize '
'exception of task {0} --> {1}: {2}'
.format(self.name,
type(e).__name__,
str(e)))
if isinstance(exception, exceptions.OperationRetry):
# operation explicitly requested a retry, so we ignore
# the handler set on the task.
handler_result = HandlerResult.retry()
elif self.on_failure:
handler_result = self.on_failure(self)
else:
handler_result = HandlerResult.retry()
if handler_result.action == HandlerResult.HANDLER_RETRY:
if isinstance(exception, exceptions.NonRecoverableError):
handler_result = HandlerResult.fail()
elif isinstance(exception, exceptions.RecoverableError):
handler_result.retry_after = exception.retry_after
if not self.is_subgraph:
causes = []
if isinstance(exception, (exceptions.RecoverableError,
exceptions.NonRecoverableError)):
causes = exception.causes or []
if isinstance(self, LocalWorkflowTask):
tb = self.async_result._holder.error[1]
causes.append(utils.exception_to_error_cause(exception, tb))
self.workflow_context.internal.send_task_event(
state=self.get_state(),
task=self,
event={'exception': exception, 'causes': causes})
return handler_result
def __str__(self):
suffix = self.info if self.info is not None else ''
return '{0}({1})'.format(self.name, suffix)
def duplicate_for_retry(self, execute_after):
"""
:return: A new instance of this task with a new task id
"""
dup = self._duplicate()
dup.execute_after = execute_after
dup.current_retries = self.current_retries + 1
if dup.cloudify_context and 'operation' in dup.cloudify_context:
op_ctx = dup.cloudify_context['operation']
op_ctx['retry_number'] = dup.current_retries
return dup
def _duplicate(self):
raise NotImplementedError('Implemented by subclasses')
@property
def cloudify_context(self):
raise NotImplementedError('Implemented by subclasses')
@property
def name(self):
"""
:return: The task name
"""
raise NotImplementedError('Implemented by subclasses')
@property
def is_subgraph(self):
return False
class RemoteWorkflowTask(WorkflowTask):
"""A WorkflowTask wrapping a celery based task"""
# cache for registered tasks queries to celery workers
cache = {}
def __init__(self,
kwargs,
cloudify_context,
workflow_context,
task_queue=None,
task_target=None,
task_id=None,
info=None,
on_success=None,
on_failure=retry_failure_handler,
total_retries=DEFAULT_TOTAL_RETRIES,
retry_interval=DEFAULT_RETRY_INTERVAL,
send_task_events=DEFAULT_SEND_TASK_EVENTS):
"""
:param kwargs: The keyword argument this task will be invoked with
:param cloudify_context: the cloudify context dict
:param task_queue: the queue this task should be sent to
:param task_target: the target worker for this task
:param task_id: The id of this task (generated if none is provided)
:param info: A short description of this task (for logging)
:param on_success: A handler called when the task's execution
terminates successfully.
Expected to return one of
[HandlerResult.retry(), HandlerResult.cont()]
to indicate whether this task should be re-executed.
:param on_failure: A handler called when the task's execution
fails.
Expected to return one of
[HandlerResult.retry(), HandlerResult.ignore(),
HandlerResult.fail()]
to indicate whether this task should be re-executed,
cause the engine to terminate workflow execution
immediately or simply ignore this task failure and
move on.
:param total_retries: Maximum retry attempt for this task, in case
the handlers return a retry attempt.
:param retry_interval: Number of seconds to wait between retries
:param workflow_context: the CloudifyWorkflowContext instance
"""
super(RemoteWorkflowTask, self).__init__(
workflow_context,
task_id,
info=info,
on_success=on_success,
on_failure=on_failure,
total_retries=total_retries,
retry_interval=retry_interval,
send_task_events=send_task_events)
self._task_target = task_target
self._task_queue = task_queue
self._kwargs = kwargs
self._cloudify_context = cloudify_context
self._cloudify_agent = None
def apply_async(self):
"""
Call the underlying celery tasks apply_async. Verify the worker
is alive and send an event before doing so.
:return: a RemoteWorkflowTaskResult instance wrapping the
celery async result
"""
try:
self._set_queue_kwargs()
self._verify_worker_alive()
task = self.workflow_context.internal.handler.get_task(
self, queue=self._task_queue, target=self._task_target)
self.workflow_context.internal.send_task_event(TASK_SENDING, self)
async_result = self.workflow_context.internal.handler.send_task(
self, task)
self.async_result = RemoteWorkflowTaskResult(self, async_result)
self.set_state(TASK_SENT)
except (exceptions.NonRecoverableError,
exceptions.RecoverableError) as e:
self.set_state(TASK_FAILED)
self.async_result = RemoteWorkflowErrorTaskResult(self, e)
return self.async_result
def is_local(self):
return False
def _duplicate(self):
dup = RemoteWorkflowTask(kwargs=self._kwargs,
task_queue=self.queue,
task_target=self.target,
cloudify_context=self.cloudify_context,
workflow_context=self.workflow_context,
task_id=None, # we want a new task id
info=self.info,
on_success=self.on_success,
on_failure=self.on_failure,
total_retries=self.total_retries,
retry_interval=self.retry_interval,
send_task_events=self.send_task_events)
dup.cloudify_context['task_id'] = dup.id
return dup
@property
def name(self):
"""The task name"""
return self.cloudify_context['task_name']
@property
def cloudify_context(self):
return self._cloudify_context
@property
def target(self):
"""The task target (worker name)"""
return self._task_target
@property
def queue(self):
"""The task queue"""
return self._task_queue
@property
def kwargs(self):
"""kwargs to pass when invoking the task"""
return self._kwargs
def _verify_worker_alive(self):
verify_worker_alive(self.name,
self.target,
self._get_registered)
def _get_registered(self):
tenant = self.workflow_context.tenant
with get_celery_app(tenant=tenant, target=self.target) as app:
worker_name = 'celery@{0}'.format(self.target)
inspect = app.control.inspect(destination=[worker_name],
timeout=INSPECT_TIMEOUT)
registered = inspect.registered()
if registered is None or worker_name not in registered:
return None
return set(registered[worker_name])
def _set_queue_kwargs(self):
if self._task_queue is None:
self._task_queue = self._derive('queue')
if self._task_target is None:
self._task_target = self._derive('name')
self.kwargs['__cloudify_context']['task_queue'] = self._task_queue
self.kwargs['__cloudify_context']['task_target'] = self._task_target
def _derive(self, property_name):
executor = self.cloudify_context['executor']
host_id = self.cloudify_context['host_id']
if executor == 'host_agent':
if self._cloudify_agent is None:
host_node_instance = get_node_instance(host_id)
cloudify_agent = host_node_instance.runtime_properties.get(
'cloudify_agent', {})
if property_name not in cloudify_agent:
raise exceptions.NonRecoverableError(
'Missing cloudify_agent.{0} runtime information. '
'This most likely means that the Compute node was '
'never started successfully'.format(property_name))
self._cloudify_agent = cloudify_agent
return self._cloudify_agent[property_name]
else:
return MGMTWORKER_QUEUE
class LocalWorkflowTask(WorkflowTask):
"""A WorkflowTask wrapping a local callable"""
def __init__(self,
local_task,
workflow_context,
node=None,
info=None,
on_success=None,
on_failure=retry_failure_handler,
total_retries=DEFAULT_TOTAL_RETRIES,
retry_interval=DEFAULT_RETRY_INTERVAL,
send_task_events=DEFAULT_SEND_TASK_EVENTS,
kwargs=None,
task_id=None,
name=None):
"""
:param local_task: A callable
:param workflow_context: the CloudifyWorkflowContext instance
:param node: The CloudifyWorkflowNode instance (if in node context)
:param info: A short description of this task (for logging)
:param on_success: A handler called when the task's execution
terminates successfully.
Expected to return one of
[HandlerResult.retry(), HandlerResult.cont()]
to indicate whether this task should be re-executed.
:param on_failure: A handler called when the task's execution
fails.
Expected to return one of
[HandlerResult.retry(), HandlerResult.ignore(),
HandlerResult.fail()]
to indicate whether this task should be re-executed,
cause the engine to terminate workflow execution
immediately or simply ignore this task failure and
move on.
:param total_retries: Maximum retry attempt for this task, in case
the handlers return a retry attempt.
:param retry_interval: Number of seconds to wait between retries
:param kwargs: Local task keyword arguments
:param name: optional parameter (default: local_task.__name__)
"""
super(LocalWorkflowTask, self).__init__(
info=info,
on_success=on_success,
on_failure=on_failure,
total_retries=total_retries,
retry_interval=retry_interval,
task_id=task_id,
workflow_context=workflow_context,
send_task_events=send_task_events)
self.local_task = local_task
self.node = node
self.kwargs = kwargs or {}
self._name = name or local_task.__name__
def dump(self):
super_dump = super(LocalWorkflowTask, self).dump()
super_dump.update({
'name': self._name
})
return super_dump
def apply_async(self):
"""
Execute the task in the local task thread pool
:return: A wrapper for the task result
"""
def local_task_wrapper():
try:
self.workflow_context.internal.send_task_event(TASK_STARTED,
self)
result = self.local_task(**self.kwargs)
self.workflow_context.internal.send_task_event(
TASK_SUCCEEDED, self, event={'result': str(result)})
self.async_result._holder.result = result
self.set_state(TASK_SUCCEEDED)
except BaseException as e:
new_task_state = TASK_RESCHEDULED if isinstance(
e, exceptions.OperationRetry) else TASK_FAILED
exc_type, exception, tb = sys.exc_info()
self.async_result._holder.error = (exception, tb)
self.set_state(new_task_state)
self.async_result = LocalWorkflowTaskResult(self)
self.workflow_context.internal.send_task_event(TASK_SENDING, self)
self.set_state(TASK_SENT)
self.workflow_context.internal.add_local_task(local_task_wrapper)
return self.async_result
def is_local(self):
return True
def _duplicate(self):
dup = LocalWorkflowTask(local_task=self.local_task,
workflow_context=self.workflow_context,
node=self.node,
info=self.info,
on_success=self.on_success,
on_failure=self.on_failure,
total_retries=self.total_retries,
retry_interval=self.retry_interval,
send_task_events=self.send_task_events,
kwargs=self.kwargs,
name=self.name)
return dup
@property
def name(self):
"""The task name"""
return self._name
@property
def cloudify_context(self):
return self.kwargs.get('__cloudify_context')
# NOP tasks class
class NOPLocalWorkflowTask(LocalWorkflowTask):
def __init__(self, workflow_context):
super(NOPLocalWorkflowTask, self).__init__(lambda: None,
workflow_context)
@property
def name(self):
"""The task name"""
return 'NOP'
def apply_async(self):
self.set_state(TASK_SUCCEEDED)
return LocalWorkflowTaskResult(self)
def is_nop(self):
return True
# Dry run tasks class
class DryRunLocalWorkflowTask(LocalWorkflowTask):
def apply_async(self):
self.workflow_context.internal.send_task_event(TASK_SENDING, self)
self.workflow_context.internal.send_task_event(TASK_STARTED, self)
self.workflow_context.internal.send_task_event(
TASK_SUCCEEDED,
self,
event={'result': 'dry run'}
)
self.set_state(TASK_SUCCEEDED)
return LocalWorkflowTaskResult(self)
def is_nop(self):
return True
class WorkflowTaskResult(object):
"""A base wrapper for workflow task results"""
def __init__(self, task):
self.task = task
def _process(self, retry_on_failure):
if self.task.workflow_context.internal.graph_mode:
return self._get()
task_graph = self.task.workflow_context.internal.task_graph
while True:
self._wait_for_task_terminated()
handler_result = self.task.handle_task_terminated()
task_graph.remove_task(self.task)
try:
result = self._get()
if handler_result.action != HandlerResult.HANDLER_RETRY:
return result
except Exception:
if (not retry_on_failure or
handler_result.action == HandlerResult.HANDLER_FAIL):
raise
self._sleep(handler_result.retry_after)
self.task = handler_result.retried_task
task_graph.add_task(self.task)
self._check_execution_cancelled()
self.task.apply_async()
self._refresh_state()
@staticmethod
def _check_execution_cancelled():
if api.has_cancel_request():
raise api.ExecutionCancelled()
def _wait_for_task_terminated(self):
while True:
self._check_execution_cancelled()
try:
self.task.wait_for_terminated(timeout=1)
break
except Queue.Empty:
continue
def _sleep(self, seconds):
while seconds > 0:
self._check_execution_cancelled()
sleep_time = 1 if seconds > 1 else seconds
time.sleep(sleep_time)
seconds -= sleep_time
def get(self, retry_on_failure=True):
"""
Get the task result.
Will block until the task execution ends.
:return: The task result
"""
return self._process(retry_on_failure)
def _get(self):
raise NotImplementedError('Implemented by subclasses')
def _refresh_state(self):
raise NotImplementedError('Implemented by subclasses')
class RemoteWorkflowErrorTaskResult(WorkflowTaskResult):
def __init__(self, task, exception):
super(RemoteWorkflowErrorTaskResult, self).__init__(task)
self.exception = exception
def _get(self):
raise self.exception
@property
def result(self):
return self.exception
class RemoteWorkflowTaskResult(WorkflowTaskResult):
"""A wrapper for celery's AsyncResult"""
def __init__(self, task, async_result):
super(RemoteWorkflowTaskResult, self).__init__(task)
self.async_result = async_result
def _get(self):
return self.async_result.get()
def _refresh_state(self):
self.async_result = self.task.async_result.async_result
@property
def result(self):
return self.async_result.result
class LocalWorkflowTaskResult(WorkflowTaskResult):
"""A wrapper for local workflow task results"""
class ResultHolder(object):
def __init__(self, result=None, error=None):
self.result = result
self.error = error
def __init__(self, task):
"""
:param task: The LocalWorkflowTask instance
"""
super(LocalWorkflowTaskResult, self).__init__(task)
self._holder = self.ResultHolder()
def _get(self):
if self._holder.error is not None:
exception, traceback = self._holder.error
raise exception, None, traceback
return self._holder.result
def _refresh_state(self):
self._holder = self.task.async_result._holder
@property
def result(self):
if self._holder.error:
return self._holder.error[0]
else:
return self._holder.result
class StubAsyncResult(object):
"""Stub async result that always returns None"""
result = None
class HandlerResult(object):
HANDLER_RETRY = 'handler_retry'
HANDLER_FAIL = 'handler_fail'
HANDLER_IGNORE = 'handler_ignore'
HANDLER_CONTINUE = 'handler_continue'
def __init__(self,
action,
ignore_total_retries=False,
retry_after=None):
self.action = action
self.ignore_total_retries = ignore_total_retries
self.retry_after = retry_after
# this field is filled in by handle_task_terminated() after
# duplicating the task and updating the relevant task fields,
# or by a subgraph on_XXX handler
self.retried_task = None
@classmethod
def retry(cls, ignore_total_retries=False, retry_after=None):
return HandlerResult(cls.HANDLER_RETRY,
ignore_total_retries=ignore_total_retries,
retry_after=retry_after)
@classmethod
def fail(cls):
return HandlerResult(cls.HANDLER_FAIL)
@classmethod
def cont(cls):
return HandlerResult(cls.HANDLER_CONTINUE)
@classmethod
def ignore(cls):
return HandlerResult(cls.HANDLER_IGNORE)
def verify_worker_alive(name, target, get_registered):
cache = RemoteWorkflowTask.cache
registered = cache.get(target)
if not registered:
registered = get_registered()
cache[target] = registered
if registered is None:
raise exceptions.RecoverableError(
'Timed out querying worker celery@{0} for its registered '
'tasks. [timeout={1} seconds]'.format(target, INSPECT_TIMEOUT))
if DISPATCH_TASK not in registered:
raise exceptions.NonRecoverableError(
'Missing {0} task in worker {1} \n'
'Registered tasks are: {2}. (This probably means the agent '
'configuration is invalid) [{3}]'.format(
DISPATCH_TASK, target, registered, name))
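The retry-budget check in `WorkflowTask.handle_task_terminated` decides whether a `HANDLER_RETRY` result is honored or downgraded to a failure. A minimal, self-contained restatement of that condition is sketched below; `retry_allowed` is a hypothetical helper, not part of the module.

```python
INFINITE_TOTAL_RETRIES = -1

def retry_allowed(total_retries, current_retries, ignore_total_retries=False):
    # Mirrors the any([...]) condition in handle_task_terminated: a retry
    # is permitted when retries are unbounded, the budget is not yet
    # exhausted, or the handler explicitly asked to bypass the budget.
    return any([total_retries == INFINITE_TOTAL_RETRIES,
                current_retries < total_retries,
                ignore_total_retries])

assert retry_allowed(INFINITE_TOTAL_RETRIES, 100) is True
assert retry_allowed(3, 2) is True                       # budget remains
assert retry_allowed(3, 3) is False                      # budget exhausted
assert retry_allowed(3, 3, ignore_total_retries=True) is True
```

When the check fails, the real method flips the handler result to `HANDLER_FAIL`, which a containing subgraph may further translate to `HANDLER_IGNORE` while marking itself failed.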
|
The Charlottesville Chapter of The Links Incorporated presents its sixteenth annual Celebration of the African American Literary Tradition, including brunch, musical and spoken word performances by community youth, a tribute to book festival authors, book sales and signing. Tickets required.
Join G.S. Wilson (Jefferson on Display) in a discussion on how Thomas Jefferson’s image, cultivated through his physical presentation, clothing choices, and etiquette, can offer insight into his complex character and the powerful effect he had on others.
Printmaker Amos Paul Kennedy, Jr. (2019 Frank Riccio Artist-in-Residence) will discuss his collaboration with the Virginia Center for the Book to produce an intergenerational project that celebrates words of wisdom shared by diverse residents of Charlottesville and Albemarle County. This program will include a free public printing demo with Amos Paul Kennedy, Jr. at 12:00 PM, followed by an artist’s talk at 1:00 PM.
Join National Book Critics Circle board members Tess Taylor and Marion Winik as they discuss the NBCC’s work in support of reading, criticism, and literature. Taylor and Winik will be joined by NBCC Literary Award-recognized author Nicole Chung (All You Can Ever Know). The NBCC awards are the only national literary awards chosen by critics themselves; the 2018 longlist is announced in January, and the awards are presented the week before the Festival.
Michelle Damiani (Santa Lucia), Ann Jeffries (Judicial Indiscretion) and Margaret Locke (The Legendary Duke) discuss their steamy romance novels and the unique challenges in writing characters over the course of a series.
Belgian authors Kristien Hemmerechts (The Woman Who Fed the Dogs) and Annelies Verbeke (Thirty Days) discuss their latest novels—stories that are gritty yet humanizing—which mark the first of their titles to be made available in English translation.
James Horn (1619) and Joseph Kelly (Marooned) reexamine well-known origin stories for our country and Commonwealth, presenting groundbreaking new histories of the 1607 arrival of colonists in Jamestown and the 1619 arrival of the first enslaved Africans and the establishment of Virginia’s first General Assembly. Consequential decisions made in harsh conditions led to the birth of democracy, but also laid the cornerstone for racial inequality from the beginning.
Esi Edugyan (Washington Black) and John Edgar Wideman (American Histories) discuss the meanings of race, violence, and freedom, as explored in their acclaimed fiction. Edugyan and Wideman each received the Anisfield-Wolf Book Award for an earlier novel; they are joined in conversation by Award jury member Rita Dove.
Samantha Boyette (What Happens When), D. Jackson Leigh (Ordinary is Perfect) and Radclyffe (Passionate Rivals) discuss the issue of diversity—ethnic, racial, sexual, and gender-related—in publishing, the similarities and differences between mainstream and LGBTQI romances, and the nuances of authentic character construction.
|
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
#
# emailIdx - Synchronizes emails from IMAP to Elasticsearch
# Copyright (C) 2015 Paul Hofmann
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
#
#########################################################################################
# Imports #
#########################################################################################
from M2Crypto import SMIME, BIO
from emailidx import SslKeystore, EmailSerializer
#########################################################################################
# Actual Decryption #
#########################################################################################
def actual_decrypt_message(message, content_type=None, content_transfer_encoding=None):
msg_str = ""
if content_type is not None:
msg_str += "Content-Type: %s\r\n" % content_type
if content_transfer_encoding is not None:
msg_str += "Content-Transfer-Encoding: %s\r\n" % content_transfer_encoding
msg_str += "\r\n%s\r\n" % message
msg_buf = BIO.MemoryBuffer(msg_str)
p7 = SMIME.smime_load_pkcs7_bio(msg_buf)[0]
decrypted_data = None
s = SMIME.SMIME()
for key_pair in SslKeystore.keystore:
s.pkey = key_pair['key']
s.x509 = key_pair['cert']
try:
decrypted_data = s.decrypt(p7)
print "[S/MIME] decrypt with %s : SUCCESS" % key_pair['email']
break
except SMIME.PKCS7_Error:
print "[S/MIME] decrypt with %s : FAILED" % key_pair['email']
continue
return EmailSerializer.serialize_email_raw_message(decrypted_data) if decrypted_data is not None else None
#########################################################################################
# Exposed Functions #
#########################################################################################
def try_decrypt_smime(message_part, crypto_method):
message_part['crypto_method'] = crypto_method
msg_headers = message_part['headers']
content_type = msg_headers['Content-Type'][0] \
if ('Content-Type' in msg_headers) and (len(msg_headers['Content-Type']) > 0) \
else None
content_transfer_encoding = msg_headers['Content-Transfer-Encoding'][0] \
if ('Content-Transfer-Encoding' in msg_headers) and (len(msg_headers['Content-Transfer-Encoding']) > 0) \
else None
msg_dec = actual_decrypt_message(message_part['content'], content_type, content_transfer_encoding)
message_part['message_decrypted'] = msg_dec
message_part['crypto_success'] = msg_dec is not None
def is_smime(message_part, crypto_method):
content_type = message_part['content_type']
if 'smime-type' not in content_type:
return False
return (content_type['_type'] == 'application/pkcs7-mime') and (content_type['smime-type'] == 'enveloped-data')
def __get_content_filter_functions__(settings):
return (is_smime, try_decrypt_smime)
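Before handing a message part to the S/MIME parser, `actual_decrypt_message` re-attaches the `Content-Type` and `Content-Transfer-Encoding` headers that were split off into the part's header dict, since `smime_load_pkcs7_bio` expects a standalone MIME message. The header reassembly can be illustrated without M2Crypto; `build_mime_blob` below is an illustrative helper name, not part of the module.

```python
def build_mime_blob(payload, content_type=None, content_transfer_encoding=None):
    # Re-attach the MIME headers that were split off the part's content,
    # producing a blob that an S/MIME parser can load as a full message.
    msg = ""
    if content_type is not None:
        msg += "Content-Type: %s\r\n" % content_type
    if content_transfer_encoding is not None:
        msg += "Content-Transfer-Encoding: %s\r\n" % content_transfer_encoding
    # Blank line separates headers from the body, per RFC 5322.
    return msg + "\r\n%s\r\n" % payload

blob = build_mime_blob("MIIB...",
                       "application/pkcs7-mime; smime-type=enveloped-data",
                       "base64")
assert blob.startswith("Content-Type: application/pkcs7-mime")
assert blob.endswith("\r\nMIIB...\r\n")
```

The real function then iterates over every key pair in the keystore, returning the first successful PKCS#7 decryption or `None` if all keys fail.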
|
Diamond Deli specialises in authentic Indian snacks and curries. We supply hot snacks and cold curries to take away and also offer a next-day collection service on all orders.
We also cater for events, conferences, parties and dinner parties. Please feel welcome to visit our stand outside of Boots in Telford Shopping Centre.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# || ____ _ __
# +------+ / __ )(_) /_______________ _____ ___
# | 0xBC | / __ / / __/ ___/ ___/ __ `/_ / / _ \
# +------+ / /_/ / / /_/ /__/ / / /_/ / / /_/ __/
# || || /_____/_/\__/\___/_/ \__,_/ /___/\___/
#
# Copyright (C) 2011-2013 Bitcraze AB
#
# Crazyflie Nano Quadcopter Client
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software
# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
"""
The about dialog.
"""
__author__ = 'Bitcraze AB'
__all__ = ['AboutDialog']
import sys
from PyQt4 import QtGui, uic
import cfclient
import cflib.crtp
(about_widget_class,
about_widget_base_class) = (uic.loadUiType(sys.path[0] +
'/cfclient/ui/dialogs/about.ui'))
debuginfo = """
<b>Cfclient version:</b> {version}<br>
<b>System:</b> {system}<br>
<br>
<b>Interface status</b><br>
{interface_status}
"""
class AboutDialog(QtGui.QWidget, about_widget_class):
def __init__(self, helper, *args):
super(AboutDialog, self).__init__(*args)
self.setupUi(self)
self._close_button.clicked.connect(self.close)
self._name_label.setText(
self._name_label.text().replace('#version#',
cfclient.VERSION))
def showEvent(self, ev):
status_text = ""
interface_status = cflib.crtp.get_interfaces_status()
for s in interface_status.keys():
status_text += "<b>{}</b>: {}<br>\n".format(s, interface_status[s])
self._debug_out.setHtml(debuginfo.format(version=cfclient.VERSION,
system=sys.platform,
interface_status=status_text))
|
Step out with confidence in the E50B6469 eyeglasses from Graviate by Coolwinks. Making a top-tier fashion statement with round frames in a transparent colour, these glasses are an ideal way to stand out from the crowd. A terrific match for those with square and oval-shaped faces, these specs pair well with every outfit in your wardrobe. Actual product colors may vary slightly from the colors shown on your computer or mobile screen.
|
# Nescient: A Python program for packing/unpacking encrypted, salted, and authenticated file containers.
# Copyright (C) 2018 Ariel Antonitis. Licensed under the MIT license.
#
# nescient/crypto/galois.py
""" Classes for creating and interacting with Galois fields (otherwise known as finite fields).
A Galois field of order q exists iff q is a prime power.
Elements in fields are represented as integers in the range 0...q-1, or alternatively, as polynomials of the form:
x_0*p^0+x_1*p^1+...+x_(n-1)*p^(n-1)
"""
# TODO: Make better class docstrings
import math
class GaloisField:
""" Defines a finite field of order q=p**n, with optional generator g and irreducible polynomial r.
Elements are considered to be normal integers in the range 0...q-1 (inclusive)
Can perform the standard operations (add, mult, exponentiation, inversion), optionally using lookup tables
"""
def __init__(self, p, n=1, r=None, g=None, maxMem=2 ** 30):
if p < 2 or n < 1:
raise ValueError('Unable to instantiate a finite field with these arguments')
self.p, self.n = p, n
self.q = self.p ** self.n # Order of the field
self.f = range(self.q) # Iterator for elements in the field
self.g = g
self.r = p if n == 1 else r # Technically reduce by p if this is a prime field
if r is None and n > 1: # If an r was not provided and is required (n > 1), find one
self.r = self.findR()
self.expTable = {}
self.logTable = {}
self.haveTables = False
# If the memory needed to make lookup tables is less than 1 GB (default), calculate them now
if self.q * math.log(self.q, 2) / 8 <= maxMem:
self.makeLookupTables()
self.haveTables = True
# Calculate the unique set of prime factors of n
@staticmethod
def prime_factors(n):
i = 2
factors = set()
while i * i <= n:
if n % i:
i += 1
else:
n //= i
factors.add(i)
if n > 1:
factors.add(n)
return factors
# Euclidean algorithm for gcd
@staticmethod
def gcd(a, b):
while b > 0:
a, b = b, a % b
return a
# Euler's totient function
@staticmethod
def phi(a):
b = a - 1
c = 0
while b > 0:
if not GaloisField.gcd(a, b) - 1:
c += 1
b -= 1
return c
# Given an element x, returns an n+1-element vector representing x as polynomials in GF(p)
def intToPoly(self, x):
return [(x // self.p ** i) % self.p for i in range(self.n + 1)]
# Given a vector of polynomials in GF(p), return the corresponding element (as an integer)
def polyToInt(self, poly):
return sum([self.p ** i * poly[i] for i in range(len(poly))])
# Generates exp & log lookup tables, for increased multiplication speed
def makeLookupTables(self):
if self.g is None or self.generate(self.g) is False: # If a generator was not provided or was invalid, find one
if self.n == 1: # If this is a prime field we can find a generator faster than brute force
pfs = GaloisField.prime_factors(self.q - 1) # Calculate the prime factors of phi(p), equal to p-1
for g in self.f:
s = set()
isGen = True
for pf in pfs:
y = self.pow(g, (self.q - 1) // pf)  # integer division; a float exponent breaks pow()
if y in s or y == 1:
isGen = False
break
s.add(y)
if isGen:
self.generate(g, False) # g is known to be valid, so no need to double check
self.g = g
return
else: # Otherwise use the brute force method
for g in self.f:
if self.generate(g): # When this is true, tables will be generated as part of the method call
self.g = g
return
else:
return
raise RuntimeError('Unable to find a generator for the specified field')
# Returns whether g is a generator for the field, also updates exp and log tables accordingly
def generate(self, g, check=True):
if check: # If using this method to check whether the generator is valid, use dictionaries
self.expTable = {}
self.logTable = {}
else: # Otherwise assume g is valid and use lists to optimize for speed
self.expTable = [0] * self.q
self.logTable = [0] * self.q
y = 1
for x in self.f:
if check and y in self.logTable and x != self.q - 1:
return False
self.expTable[x] = y
self.logTable[y] = x
y = self.mult(g, y)
if check and len(self.logTable) != self.q - 1:
return False
self.logTable[1] = 0
return True
# Attempts to find the smallest degree n irreducible polynomial over the field
def findR(self):
for r in range(self.q + self.p, self.q * self.p): # Search only for degree n polynomials
if self.isIrreducible(r):
return r
raise RuntimeError('Unable to find an irreducible polynomial for the specified field')
# Checks whether a given polynomial is irreducible
def isIrreducible(self, r):
for i in range(self.p, self.q):
if self.modP(r, i) == 0:
return False
return True
# Multiplies two elements, without reducing if the product is outside of the field
def multPoly(self, a, b):
if self.n == 1: # Multiplication in a prime field without reduction
return a * b
if self.p == 2: # We can use bitwise operations when p==2
# Multiply each polynomial via bit shifts and xors
c = 0
for i in range(self.n):
if b & (1 << i):
c ^= a * 1 << i
return c
else: # Otherwise operate on polynomial representations of integers
p_a = self.intToPoly(a)
p_b = self.intToPoly(b)
p_c = [0] * 2 * self.n # Need enough space for the x**n * x**n term
# Multiply polynomials mod P (naively)
for i in range(self.n):
for j in range(self.n):
p_c[i + j] += p_a[i] * p_b[j]
p_c[i + j] %= self.p
return self.polyToInt(p_c)
# Calculates the remainder a mod b, performing subtraction of polynomials mod p
# Optionally, continues until the remainder is below some bound
def modP(self, a, b, bound=None):
if self.n == 1: # Mod in prime fields is easy!
return a % b
if bound is None:
bound = b
if self.p == 2: # Mod in 2**n fields is also easy (bitwise)
while a >= bound:
aBits = int(math.log2(a))
bBits = int(math.log2(b))
a ^= b << (aBits - bBits)
return a
else: # Otherwise use the slower polynomial method
p_a = self.intToPoly(a)
p_b = self.intToPoly(b)
while a >= bound:
aPits = int(math.log(a, self.p))
bPits = int(math.log(b, self.p))
for i in range(bPits + 1):
p_a[aPits - bPits + i] -= p_b[i]
p_a[aPits - bPits + i] %= self.p
a = self.polyToInt(p_a)
return a
# Adds two elements in the field
def add(self, a, b):
if self.n == 1: # Addition in a prime field is just modulo p
return (a + b) % self.p
if self.p == 2: # Special case, when p=2, addition is bitwise XOR
return (a ^ b) & (self.q - 1)
else: # Otherwise we need to break integers into polynomial representations and add modulo p
a_p = self.intToPoly(a)
b_p = self.intToPoly(b)
c_p = [(a_p[i] + b_p[i]) % self.p for i in range(self.n)]
return self.polyToInt(c_p)
# Multiplies two elements in the field
def mult(self, a, b):
if self.haveTables: # Use lookup tables if possible
return 0 if (a == 0 or b == 0) else self.expTable[(self.logTable[a] + self.logTable[b]) % (self.q - 1)]
else: # Otherwise use the slower reduction method
return self.modP(self.multPoly(a, b), self.r, bound=self.q)
# Returns the multiplicative inverse of an element using lookup tables
def inverse(self, x):
if self.haveTables: # Use lookup tables if possible
# Technically speaking, 0 has no multiplicative inverse, so just define it as itself
return 0 if x == 0 else self.expTable[self.q - 1 - self.logTable[x]]
else: # TODO Otherwise, well, give up (might do this later, there's an easy way for prime fields)
raise NotImplementedError
# Raise an element in the field to a power
def pow(self, a, b):
if self.haveTables: # Use lookup tables if possible
return 0 if a == 0 else self.expTable[(self.logTable[a] * b) % (self.q - 1)]
elif self.n == 1: # If this is a prime field use Python's modular exponentiation
return pow(a, b, self.p)
else: # Otherwise use exponentiation by repeated squaring
c = 1
while b > 0:
if b % 2 == 0:
a = self.mult(a, a)
b //= 2  # integer division keeps the exponent an int in Python 3
else:
c = self.mult(a, c)
b -= 1
return c
# Allows for grabbing GfElement representations by indexing
def __getitem__(self, item):
if 0 <= item < self.q:
return GfElement(item, self)
raise IndexError
class GfElement:
""" Object representation of a GaloisField element.
Allows one to perform intuitive operations on the elements and get the correct results
"""
def __init__(self, val, f):
assert (0 <= val < f.q)
self.f = f
self.val = val
def __add__(self, other):
assert (self.f == other.f)
return self.f.add(self.val, other.val)
def __mul__(self, other):
assert (self.f == other.f)
return self.f.mult(self.val, other.val)
def __pow__(self, power): # Note that power is considered to be an integer, not a GfElement
return self.f.pow(self.val, power)
def __invert__(self):
return self.f.inverse(self.val)
def __str__(self):
return str(self.val)
def __index__(self):
return int(self.val)
def __int__(self):
return int(self.val)
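To make the multiplication-and-reduction path above concrete, here is a self-contained sketch of GF(2**8) multiplication using the AES reduction polynomial x^8 + x^4 + x^3 + x + 1 (0x11b). The polynomial choice is an assumption for the example — it is not necessarily the r that findR() would pick — but the carry-less multiply and bitwise reduction mirror what multPoly and modP do when p == 2.

```python
def gf256_mult(a, b, r=0x11b):
    # Carry-less multiply of two GF(2**8) elements, then reduce by the
    # (assumed) AES polynomial 0x11b; mirrors multPoly + modP for p == 2.
    c = 0
    for i in range(8):
        if b & (1 << i):
            c ^= a << i
    # Reduce the up-to-15-bit product back into the field
    while c.bit_length() > 8:
        c ^= r << (c.bit_length() - 9)
    return c

# 0x53 and 0xCA are multiplicative inverses in the AES field
print(hex(gf256_mult(0x53, 0xCA)))  # → 0x1
```

The same identity (0x53 · 0xCA = 0x01) is what GaloisField(2, 8).inverse would recover via its log/exp tables, just computed here without any table.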
|
Matt Campbell is the toast of college football at the moment, and with good reason. Iowa State is ranked at No. 25 in this week’s edition of the AP Top 25, the Cyclones’ first appearance in the poll since Sept. 25, 2005. In the midst of a 3-game winning streak, Iowa State is 5-2 on the year and controls its own destiny to reach the Big 12 championship game.
That seemed unthinkable upon Campbell’s hiring in Nov. 2015, and still seemed unthinkable just three weeks ago, before Iowa State and its walk-on quarterback went to Norman and beat then-No. 3 Oklahoma.
But Iowa State AD Jamie Pollard recognized Campbell’s potential and knew others could come after his still-just-37-years-old head coach, and planned ahead by protecting the school accordingly.
We’ve reviewed Campbell’s contract and found this section, outlining what Campbell would owe Iowa State should he leave for another head coaching job. In short: he would owe the school his entire remaining contract.
Campbell is under contract through the 2021 season with a salary of $2.1 million in 2017 that rises by $100,000 a year, meaning Campbell is in line to earn $2.2 million in 2018, $2.3 million in 2019, $2.4 million in 2020 and $2.5 million in 2021.
Add it all up and you get a $9.4 million buyout should Campbell leave after this season.
It’s worth keeping in mind Campbell is still just 37 and is being paid very well right now. In addition to his $2.1 million salary, Campbell will earn a $500,000 bonus with one more victory and is a candidate (as we sit here in late October) to pick up a $50,000 check for winning the Big 12 Coach of the Year award. He’s also in line for a bonus of $250,000 for winning or sharing the Big 12 regular season championship and $100,000 for winning a bowl game. That would add up to $3 million.
Plus, being the head coach at Iowa State means something to Campbell. Or, at least he said it did when he told this story during his introductory press conference.
Still, this is college football and young, successful coaches are the sport’s hottest commodity. Campbell will have suitors this winter, and at least one connected observer believes the coach already has his bags packed to leave Ames.
Dear AD’s thinking of making a new hire I’ll save you your coaching search firm fee-go DIRECTLY to Ames, IA and get Matty Campbell FAST!!
Herbstreit is connected within the sport and we don’t want to discount what he’s saying, but anyone paying a $9.4 million buyout would be beyond unprecedented.
In the closest recent example we could find, Colorado State tried to fence in Jim McElwain with a $7.5 million buyout. The Gators eventually smoked McElwain out of Fort Collins thanks in large part to a public campaign run by then-AD Jeremy Foley that made the idea of McElwain returning to Colorado State impossible for both sides, but Florida wasn’t stroking a $7.5 million check to get him. Instead, Florida paid $3 million, McElwain himself covered $2 million and Florida made up the difference by paying Colorado State $2 million for a game at The Swamp in 2018.
That was Florida, a blue blood of college football, and still $7.5 million was too much for them. And Campbell’s buyout is nearly $2 million higher than that. Now add in the $4 or $5 million salary Campbell would command, a similar number he would require to pay his staff, and you’re looking at a price tag approaching $20 million in the first year alone to extract Campbell from Ames.
Nothing is impossible in college football, but paying nearly $10 million to get a coach out of a job that (he says) is very important to him — a coach that could (quite understandably) be 5-5 next week after Iowa State finishes a 3-game swing that includes No. 4 TCU, No. 22 West Virginia and No. 11 Oklahoma State — would be unheard of.
|
from utils.header import Header, NonEncodingField
from utils.commafy import commafy
class SymbolTableBase(Header):
FIELDS = (
NonEncodingField('desc'),
)
def __init__(self, entry_type, num_entries=None):
if num_entries is not None:
desc = '%s %s' % (commafy(num_entries), entry_type)
else:
desc = entry_type
super(SymbolTableBase, self).__init__('SymbolTable: %s' % desc, desc=desc)
class SymbolTable(SymbolTableBase):
SYM_INDEX = 0
N_STRX = 1
N_TYPE = 2
N_SECT = 3
N_DESC = 4
N_VALUE = 5
SYM_NAME = 6
def __init__(self, num_symbols):
super(SymbolTable, self).__init__('symbol entries', num_symbols)
self.symbols = list()
def add(self, nlist):
idx = len(self.symbols)
self.symbols.append((idx, nlist.n_strx, nlist.n_type, nlist.n_sect, nlist.n_desc, nlist.n_value, None))
def correlate_string_table(self, sym_str_tab):
assert isinstance(sym_str_tab, SymbolStringTable)
for idx in xrange(len(self.symbols)):
n_strx = self.symbols[idx][self.N_STRX]
if n_strx == 0:
continue
sym_name = sym_str_tab.symbol_strings.get(n_strx, None)
if sym_name is not None:
self.symbols[idx] = self.symbols[idx][:self.SYM_NAME] + (sym_name,)
def filter(self, pattern=None):
if pattern is None:
return range(len(self.symbols))
indices = list()
for (sym_idx, (index, n_strx, n_type, n_sect, n_desc, n_value, symbol_name)) in enumerate(self.symbols):
if symbol_name is not None and pattern in symbol_name:
indices.append(sym_idx)
return indices
class SymbolStringTable(SymbolTableBase):
def __init__(self):
super(SymbolStringTable, self).__init__('string table')
self.symbol_strings = dict()
def add(self, n_strx, s):
self.symbol_strings[n_strx] = s
class IndirectSymbolTable(SymbolTableBase):
def __init__(self, num_indirect_symbols):
super(IndirectSymbolTable, self).__init__('indirect symbols', num_indirect_symbols)
class ExtRefSymbolTable(SymbolTableBase):
def __init__(self, num_ext_ref):
super(ExtRefSymbolTable, self).__init__('external references', num_ext_ref)
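The SymbolStringTable above maps n_strx byte offsets to names, as in a Mach-O string table. As a sketch of where those offsets come from (the blob below is made up for the example, and real loaders read offsets from nlist entries rather than re-deriving them), a raw null-terminated string table can be turned into that offset-to-name mapping like this:

```python
def parse_strtab(blob):
    # Map each string's starting byte offset (its n_strx) to the string,
    # mirroring what SymbolStringTable.add() is fed entry by entry.
    table = {}
    offset = 0
    for chunk in blob.split(b'\x00'):
        if chunk:
            table[offset] = chunk.decode('ascii')
        offset += len(chunk) + 1  # +1 for the terminating null byte
    return table

print(parse_strtab(b'\x00_main\x00_printf\x00'))
# → {1: '_main', 7: '_printf'}
```

Offset 0 is conventionally the empty string, which is why SymbolTable.correlate_string_table skips entries with n_strx == 0.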
|
Episode 53 of Voices in AI features host Byron Reese and Nova Spivack talking about neurons, the Gaia hypothesis, intelligence, and quantum physics. Nova Spivack is a leading technology futurist, serial entrepreneur and angel investor.
Byron Reese: This is Voices in AI brought to you by GigaOm. I’m Byron Reese. Today, I’m excited we have Nova Spivack as our guest. Nova is an entrepreneur, a venture capitalist, an author; he’s a great many other things. He’s referred to by a wide variety of sources as a polymath, and he’s recently started a science and tech studio called Magical in which he serves as CEO.
He’s had his fingers in all sorts of pies and things that you’re probably familiar with. He was the first investor in Klout. He was in early on something that eventually became Siri. He was the co-founder of EarthWeb, Radar Network, The Daily Dot, Live Matrix. It sounds like he does more before breakfast than I manage to get done in a week. Welcome to the show, Nova.
Nova Spivack: Thank you! Very kind of you.
So, let’s start off with artificial intelligence. When I read what you write and when I watch videos about you, you have a very clear view of how you think the future is going to unfold with regards to technology and AI specifically. Can you just take whatever time you want and just describe for our listeners how you think the future is going to happen?
Sure, so I’ve been working in the AI field since long before it was popular to say that. I actually started while I was still in college working for Kurzweil in one of his companies, in an AI company that built the Kurzweil Reading Machine. I mean I was doing early neural network there, that was the end of the ‘80s or early ‘90s, and then I worked under Danny Hillis at Thinking Machines on supercomputing and AI related applications.
Then after that, I was involved in a company called Individual, which was the first company to do intelligent-agent-powered news filtering, and then began to start internet companies and worked in the semantic web, large-scale collaborative filtering projects, [and] intelligent assistants. I advised a company called Next IT, which is one of the leading bot platforms, and I’ve built a big data mining analytics company. So I’ve been deeply involved in this technology on a hands-on basis both as a scientist and even as an engineer in the early days [but also] from the marketing and business side and venture capital side. So, I really know this space.
First of all, it’s great to see AI in vogue again. I lived through the first AI winter and the second sort of unacknowledged AI winter around the birth and death of the semantic web, and now here we are in the neural network machine learning renaissance. It’s wonderful to see this happening. However, I think that the level of hype that we see is probably not calibrated with reality and that inevitably there’s going to be a period of disillusionment as some of the promises that have been made don’t pan out.
So, I think we have to keep a very realistic view of what this technology is and what it can and cannot do, and where it fits in the larger landscape of machine intelligence. So, we can talk about that today. I definitely have a viewpoint that’s different from some of the other pundits in the space in terms of when or if the singularity will happen, and in particular spent years thinking about and studying cognitive science and consciousness. And I have some views on that, based on a lot of research, that are probably be different from what we are hearing on the mainstream thinkers. So, I think it will be an interesting conversation today as we get into some of these questions, and probably get quite far into technology and philosophy.
|
# -*- coding: utf-8 -*-
import hashlib
import binascii
from thrift.transport.THttpClient import THttpClient
from thrift.protocol.TBinaryProtocol import TBinaryProtocol
from evernote.edam.userstore import UserStore
from evernote.edam.notestore import NoteStore
import evernote.edam.type.ttypes as Types
import evernote.edam.error.ttypes as Errors
from evernote.api.client import EvernoteClient
from .settings import EVERNOTE_NOTEBOOK
import logging
class Sink(object):
pass
class EvernoteSink(Sink):
def __init__(self, token, sandbox=False):
"""Initialize evernote connection.
Client connection handle is assigned to the client property.
Two properties user_store and note_store are provided for the convenience.
"""
self.token = token
self.client = EvernoteClient(token=self.token, sandbox=sandbox)
self.user_store = self.client.get_user_store()
self.note_store = self.client.get_note_store()
def image_resource(self, item):
#FIXME create image resource (near-duplicate of pdf_resource; refactor shared hashing)
md5 = hashlib.md5()
md5.update(item.content)
hashvalue = md5.digest()
data = Types.Data()
data.size = len(item.content) #FIXME better ways of doing this calculation?
data.bodyHash = hashvalue
data.body = item.content
resource = Types.Resource()
resource.mime = item.content_type
resource.data = data
return resource
def pdf_resource(self, item):
#FIXME create pdf resource
md5 = hashlib.md5()
md5.update(item.content)
hashvalue = md5.digest()
data = Types.Data()
data.size = len(item.content) #FIXME better ways of doing this calculation?
data.bodyHash = hashvalue
data.body = item.content
resource = Types.Resource()
resource.mime = 'application/pdf'
resource.data = data
return resource
def note_attribute(self, source_url=''):
attributes = Types.NoteAttributes()
attributes.sourceURL = source_url
return attributes
def create_note(self, title, content, notebook_name='', tags='', attributes=None, resources=None):
note = Types.Note()
note.title = title
if attributes:
note.attributes = attributes
if tags:
note.tagNames = [t.encode('utf-8', 'xmlcharrefreplace') for t in tags.split()] # Assuming no spaces in tags
logging.debug(note.tagNames)
if notebook_name:
notebooks = self.note_store.listNotebooks(self.token)
for notebook in notebooks:
if notebook.name == notebook_name:
note.notebookGuid = notebook.guid
break
else:
pass # create a note in default notebook
note.content = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE en-note SYSTEM "http://xml.evernote.com/pub/enml2.dtd">
<en-note>{}""".format(content.encode('utf-8', 'xmlcharrefreplace'))
if resources:
note.resources = resources
for r in resources:
note.content += """<en-media type="{}" hash="{}"/>""".format(r.mime, binascii.hexlify(r.data.bodyHash))
note.content += "</en-note>"
logging.debug(note.content)
created_note = self.note_store.createNote(self.token, note)
return created_note
def push(self, item):
kwargs = {
'title': item.title.encode('utf-8', 'xmlcharrefreplace'),
'content': item.body,
'tags': item.tags,
'notebook_name': EVERNOTE_NOTEBOOK,
'attributes': self.note_attribute(item.url),
}
if item.itemtype == 'PDF':
resource = self.pdf_resource(item)
kwargs['resources'] = [resource]
elif item.itemtype == 'image':
resource = self.image_resource(item)
kwargs['resources'] = [resource]
elif item.itemtype == 'HTML':
#FIXME check for image inside and create image resources
kwargs['content'] = item.content
elif item.itemtype == 'text':
kwargs['content'] = item.content
else:
# XXX Assuming plaintext type
# Should I raise exception for unknown items?
item.itemtype = 'text'
self.create_note(**kwargs)
class Database(Sink):
pass
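Evernote identifies an attached resource inside ENML by the MD5 hash of its body, which is what the `en-media` tag built in create_note embeds via binascii.hexlify. A minimal sketch of that hash computation (the sample bytes are made up for the example):

```python
import hashlib
import binascii

def body_hash_hex(content):
    # Hex digest Evernote expects in <en-media hash="..."/>,
    # matching the bodyHash set on Types.Data above.
    return binascii.hexlify(hashlib.md5(content).digest()).decode('ascii')

print(body_hash_hex(b'hello'))  # → 5d41402abc4b2a76b9719d911017c592
```

The raw 16-byte digest goes into `Data.bodyHash`, while the 32-character hex form goes into the ENML tag — mixing the two up is a common cause of resources failing to render.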
|
Wish you had more attendees at your open house? With social media you can increase attendance virtually! Record your open house on Facebook Live, Instagram Live, or Snapchat. What’s also great about Facebook and Instagram Live, is that your followers will receive a notification once you’ve gone live.
Do you farm a particular subdivision or neighborhood enclave? Consider investing in aerial or “drone” video footage. Many agents are purchasing their own drones for video marketing purposes. You’ll capture more buyer and seller leads in your neighborhood when you prove you are the expert in your farm.
66% of millennials are first time home buyers according to the National Association of REALTOR®’s Home Buyers & Sellers Generational Trends Report. Millennials also love social media and videos. In fact, according to a WordStream study, millennials watch online videos more than any other group. Creating how to or informational videos geared towards first time home buyers will target the right audience and help you capture more buyer leads.
When sellers begin preparing for sale, they’ll start looking at home repairs before they begin their search for an agent. Creating videos on sinkhole repairs, CO2 detectors, painting a home, etc. will increase your exposure to online sellers that are preparing for sale.
As an agent you must LOVE your community. By love we mean be an active, engaged member both online and off. To increase your online exposure with local home buyers and sellers, start recording local concerts, sporting events, fairs, and other events. Bonus points if you can add events to Facebook or Instagram Live.
Online reviews mean more now than ever. However, many online consumers have grown wary of the authenticity of online reviews with the increase of sponsored reviews. Video testimonials make your services far more credible. If you have a past client who is either a close friend or found your services to be outstanding, ask if they would be willing to participate in an online testimonial.
Many buyers and sellers want to get to know you personally, before they accept your services. Real estate is a people business after all. One way to stand apart from other agents online is to conduct videos about you and your credentials. Ask a friend to film you or use your laptop's video camera.
Want to reach more buyers and sellers online without filming a video? Zurple generates buyer and seller leads for real estate agents through social media and search engine marketing services. Each Zurple agent is ensured a steady flow of leads and traffic from paid services such as Google or Facebook ads. To see if Zurple still has leads available in your market, click the link below.
|
#!/usr/bin/env python
experiment_dir = '/Users/eija/Documents/FinnBrain/pipelinedata'
DTIprep_protocol = '/Users/eija/Documents/FinnBrain/scripts/default.xml'
from argparse import ArgumentParser
import os
import math
import numpy as np
import glob
import dicom
if __name__ == "__main__":
parser = ArgumentParser()
parser.add_argument("--basedir", dest="basedir", help="base directory for image data", required=True)
args = parser.parse_args()
# Go through all patient subdirectories
DICOMbasedirs = glob.glob(args.basedir + os.sep + '*')
for DICOMbasedir in DICOMbasedirs:
#print "READING BASE DICOM [" + DICOMbasedir + "]"
StudyDirs = glob.glob(DICOMbasedir + os.sep + '*')
# Take the first series directory of the first study subdirectory
SeriesDirs = glob.glob(StudyDirs[0] + os.sep + '*')
SeriesDir = SeriesDirs[0]
#print "READING DTI DICOM STUDY [" + SeriesDir + "]"
try:
filenames = os.listdir(SeriesDir)
ds = dicom.read_file(os.path.join(SeriesDir, filenames[0]))
except Exception as inst:
print type(inst) # the exception instance
print inst.args # arguments stored in .args
print inst # __str__ allows args to be printed directly
continue # skip this patient if the DICOM read failed; ds is undefined here
print ds.PatientsName
|
The act of volunteering is a self-less one and a wonderful way to make a difference. Volunteering requires dedication, commitment and the desire to learn. It takes many forms, from a life-time activity to a one time event. We would love to hear from you about how you would like to get involved! Please complete and submit our volunteer application below - we'll be in touch!
|
# -----------------------------------------------------------------------
# Copyright (c) 2018 Jendrik Seipp
#
# RedNotebook is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# RedNotebook is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with RedNotebook; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
# -----------------------------------------------------------------------
import ctypes
import logging
import sys
from gi.repository import Gdk, GObject, Gtk
from rednotebook.util import filesystem
try:
from cefpython3 import cefpython as cef
except ImportError as err:
cef = None
if filesystem.IS_WIN:
logging.info(
"CEF Python not found. Disabling clouds and"
' in-app previews. Error message: "{}"'.format(err)
)
if cef:
class HtmlView(Gtk.DrawingArea):
"""
Loading HTML strings only works if we pass the `url` parameter to
CreateBrowserSync.
When we call load_html() the first time, the browser is not yet
created. Therefore, we store the initial html and load it when
the browser is created.
"""
NOTEBOOK_URL = "file:///"
def __init__(self):
super().__init__()
self._browser = None
self._win32_handle = None
self._initial_html = ""
sys.excepthook = cef.ExceptHook # To shutdown CEF processes on error.
cef.Initialize(settings={"context_menu": {"enabled": False}})
GObject.threads_init()
GObject.timeout_add(10, self.on_timer)
self.connect("configure-event", self.on_configure)
self.connect("size-allocate", self.on_size_allocate)
self.connect("focus-in-event", self.on_focus_in)
self.connect("realize", self.on_realize)
def load_html(self, html):
if self._browser:
self._browser.GetMainFrame().LoadString(html, self.NOTEBOOK_URL)
else:
self._initial_html = html
def set_font_size(self, size):
pass
def get_handle(self):
Gdk.threads_enter()
ctypes.pythonapi.PyCapsule_GetPointer.restype = ctypes.c_void_p
ctypes.pythonapi.PyCapsule_GetPointer.argtypes = [ctypes.py_object]
gpointer = ctypes.pythonapi.PyCapsule_GetPointer(
self.get_property("window").__gpointer__, None
)
# The GTK 3.22 stack needs "gdk-3-3.0.dll".
libgdk = ctypes.CDLL("libgdk-3-0.dll")
handle = libgdk.gdk_win32_window_get_handle(gpointer)
Gdk.threads_leave()
return handle
def on_timer(self):
cef.MessageLoopWork()
return True
def on_realize(self, *_):
self._embed_browser()
def _embed_browser(self):
window_info = cef.WindowInfo()
self._win32_handle = self.get_handle()
window_info.SetAsChild(self._win32_handle)
self._browser = cef.CreateBrowserSync(window_info, url=self.NOTEBOOK_URL)
self._browser.SetClientCallback("OnBeforeBrowse", self.on_before_browse)
self._browser.SetClientCallback("OnAddressChange", self.on_address_change)
self.load_html(self._initial_html)
self._initial_html = None
@GObject.Signal(name="on-url-clicked", arg_types=(str,))
def url_clicked_signal(self, url):
logging.debug("Emitting on-url-clicked signal: %s", url)
def on_before_browse(self, browser, frame, request, **_):
url = request.GetUrl()
# For some reason GetUrl() appends slash to the returned URL so we need to compensate for it:
# (https://bugs.chromium.org/p/chromium/issues/detail?id=339054 might be the cause)
if url == self.NOTEBOOK_URL + "/":
# On first invocation the url points to dummy NOTEBOOK_URL.
# There is no reason to emit signal for it.
return False
self.url_clicked_signal.emit(url)
return True
def on_address_change(self, browser, frame, url):
if url == self.NOTEBOOK_URL:
return
self.url_clicked_signal.emit(url)
def on_configure(self, *_):
if self._browser:
self._browser.NotifyMoveOrResizeStarted()
return False
def on_size_allocate(self, _, data):
if self._browser:
cef.WindowUtils().OnSize(self._win32_handle, 0, 0, 0)
def on_focus_in(self, *_):
if self._browser:
self._browser.SetFocus(True)
return True
return False
def shutdown(self, *_):
if self._browser:
self._browser.CloseBrowser(True)
# Clear browser references that you keep anywhere in your
# code. All references must be cleared for CEF to shutdown cleanly.
self._browser = None
cef.Shutdown()
|
According to a new report North America Security as a Service Market, published by KBV research, the North America Security as a Service Market would witness market growth of 14.9% CAGR during the forecast period (2018 – 2024).
The US market would dominate the North America Network Security as a Service Market by Country by 2024, growing at a CAGR of 13.3 % during the forecast period. The Canada market is expected to witness a CAGR of 16.7% during (2018 - 2024). Additionally, The Mexico market is expected to witness a CAGR of 15.6% during (2018 - 2024).
The BFSI market dominated the North America Security as a Service Market by End User in 2017. The Telecom & IT market is expected to witness a CAGR of 14% during (2018 - 2024). The Healthcare market is expected to witness a CAGR of 16.1% during (2018 - 2024). Additionally, the Manufacturing market is expected to witness a CAGR of 15.7% during (2018 - 2024).
The Solution market dominated the Mexico Security as a Service Market by Component in 2017, growing at a CAGR of 15.3% during the forecast period. The Services market is expected to witness a CAGR of 19.3% during (2018 - 2024).
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import inspect
import numpy as np
import emcee
import george
from george import kernels
import os
import sys
currentframe = inspect.currentframe()
currentdir = os.path.dirname(os.path.abspath(inspect.getfile(currentframe)))
parentdir = os.path.dirname(currentdir)
sys.path.insert(0, parentdir)
import profiles
import gpew
def single_kernel_noisemodel(p):
"""
Simple one squared-exponential kernel noise model.
"""
return george.GP(p[0] * kernels.ExpSquaredKernel(p[1]))
def single_kernel_lnprior(p):
amp, xcen, sigma, lna, lnalpha = p
if (-50. < lna < 0. and amp > 0. and sigma > 0. and xcen > 8685 and
xcen < 8690):
return 0.0
return -np.inf
# Load the spectrum
d = np.loadtxt('spec.txt').T
# Select the region around the line
sel = (d[0] > 8680) & (d[0] < 8696)
# Come up with uncertainties for S/N = 100
yerr = np.ones_like(d[0][sel]) * 0.01
# Store the line in the lines array
lines = [(d[0][sel], d[1][sel], yerr)]
# Define the profile for the line
pfiles = [profiles.gaussian]
# Generate the array that stores how many parameters each profile
# has. There is only one line and we are using a Gaussian profile, so we
# know we have 3 parameters, but this way we don't need to think about it.
pparn = np.cumsum([0] +\
[len(inspect.getargspec(i)[0]) - 1 for i in pfiles])
# Initial values for the parameters. The first three are for the Gaussian
# profile, the next two for the one kernel GP noise model. The values
# should be close to the optimal (this is important).
initial = [0.28, # profile amplitude
8687.82, # profile center wavelength
1.53, # profile sigma
-6.1, # kernel amplitude
0.3 # kernel scale-length
]
# Sampler initialization
nwalkers = 128
ndim = len(initial)
# 100 is not enough! Make sure the convergence is satisfactory before
# accepting any results!
niter = 100
# Replace with None to get a trivial chi2 like noise model
noisemodel = single_kernel_noisemodel
data = [lines, pfiles, pparn, noisemodel, single_kernel_lnprior]
# Initial states of the walkers - N-dim Gaussian around the initial values
p0 = np.array([np.array(initial) + 1e-2 * np.random.randn(ndim)
for i in xrange(nwalkers)])
# Sampler object
sampler = emcee.EnsembleSampler(nwalkers, ndim, gpew.lnprob, args=data)
# Let's run it!
p0, lnp, _ = sampler.run_mcmc(p0, niter)
sampler.reset()
# Let's get the best lnp value, re-initialize it and run it again.
p = p0[np.argmax(lnp)]
p0 = [p + 1e-2 * np.random.randn(ndim) for i in xrange(nwalkers)]
p0, _, _ = sampler.run_mcmc(p0, niter)
# Collect the samples
samples = sampler.flatchain
# Plot stuff:
# error bars: observed line
# red: +-1 sigma of the complete model
# blue: +-1 sigma of the profile model
gpew.plot_lines(lines, pfiles, pparn, noisemodel, samples,
nwalkers, wlwidth=8.1, gpsamples=100)
|
This page describes checks supported by go-critic linter.
checker is enabled by default.
checker is disabled by default.
Diagnostics try to find programming errors in the code. They also detect code that may be correct, but looks suspicious.
All diagnostics are enabled by default (unless they have the “experimental” tag).
Style checks suggest replacing some form of expression/statement with another one that is considered more idiomatic or simple.
Only non-opinionated style checks are enabled by default.
Performance checks tell you about potential issues that can make your code run slower than it could.
All performance checks are disabled by default.
Detects suspicious append result assignments.
Detects append chains to the same slice that can be done in a single append call.
Detects assignments that can be simplified by using assignment operators.
Detects bool expressions that can be simplified.
Detects when predeclared identifiers are shadowed in assignments.
Detects capitalized names for local variables.
Detects erroneous case order inside switch statements.
Detects malformed ‘code generated’ file comments.
// Code generated by foogen. DO NOT EDIT.
Detects comments with non-idiomatic formatting.
Detects commented-out code inside function bodies.
Detects when the default case in a switch isn’t in the first or last position.
Detects comments that silence go lint complaints about doc-comment.
// Foo is a demonstration-only function.
Detects duplicated branch bodies inside conditional statements.
Detects duplicated case clauses inside switch statements.
Detects else with nested if statement that can be replaced with else-if.
Detects fallthrough that can be avoided by using multi case values.
Detects empty string checks that can be written more idiomatically.
Detects unoptimal strings/bytes case-insensitive comparison.
Detects calls to exit/fatal inside functions that use defer.
Detects immediate dereferencing of flag package pointers.
Suggests using a pointer to an array to avoid the copy when applying & to a range expression.
Dereferencing returned pointers leads to hard-to-find errors where flag values are not updated after flag.Parse().
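The failure mode and the safe pattern can be sketched as follows (the `parseHost` helper, the `demo` flag set, and the `-host` flag are illustrative only, not part of go-critic):

```go
package main

import (
	"flag"
	"fmt"
)

// parseHost keeps the *string returned by fs.String and dereferences it
// only after fs.Parse has run, so the parsed value is actually observed.
// Writing `host := *fs.String(...)` instead would freeze the default value,
// which is exactly what the flagDeref check warns about.
func parseHost(args []string) string {
	fs := flag.NewFlagSet("demo", flag.ContinueOnError)
	host := fs.String("host", "localhost", "listen host")
	fs.Parse(args)
	return *host
}

func main() {
	fmt.Println(parseHost([]string{"-host", "example.com"}))
}
```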
Detects flag names with whitespace.
Detects hex literals that have mixed case letter digits.
Detects params that incur excessive amount of copying.
Detects repeated if-else statements and suggests to replace them with switch statement.
Permits single else or else-if; repeated else-if or else + else-if will trigger suggestion to use switch statement. See EffectiveGo#switch.
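A minimal sketch of the suggested rewrite, replacing a repeated if-else chain with a tagless switch (the `classify` function is illustrative only):

```go
package main

import "fmt"

// classify uses a tagless switch instead of the equivalent
// if n < 0 { ... } else if n == 0 { ... } else { ... } chain.
func classify(n int) string {
	switch {
	case n < 0:
		return "negative"
	case n == 0:
		return "zero"
	default:
		return "positive"
	}
}

func main() {
	fmt.Println(classify(-3), classify(0), classify(7))
}
```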
Detects when imported package names are shadowed in assignments.
Detects strings.Index calls that may cause unwanted allocs.
Detects non-assignment statements inside if/switch init clause.
Detects method expression call that can be replaced with a method call.
Finds where nesting level could be reduced.
Detects return statements whose results evaluate to nil.
// (B) - typo in "==", change to "!="
Detects octal literals passed to functions.
Detects various off-by-one kind of errors.
Detects if function parameters could be combined by type and suggests the way to do it.
Detects input and output parameters that have a type of pointer to referential type.
Detects expensive copies of for loop range expressions.
See Go issue for details: https://github.com/golang/go/issues/15812.
Detects loops that copy big objects during each iteration.
Suggests using index access, or taking the address and using a pointer instead.
Detects regexp.Compile* that can be replaced with regexp.MustCompile*.
Detects switch statements that could be better written as if statement.
Detects usage of len when the result is obvious or doesn’t make sense.
Detects redundant conversions between string and byte.
Detects switch-over-bool statements that use explicit true tag value.
Detects repeated type assertions and suggests to replace them with type switch statement.
// Code A, uses x.
// Code B, uses x.
// Code C, uses x.
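A sketch of the suggested rewrite, with the repeated `x.(T)` assertions of Code A/B/C replaced by a single type switch that binds the converted value once (the `describe` function is illustrative only):

```go
package main

import "fmt"

// describe handles each concrete type in one type switch; v is already
// converted in each case, so no repeated x.(T) assertions are needed.
func describe(x interface{}) string {
	switch v := x.(type) {
	case int:
		return fmt.Sprintf("int %d", v)
	case string:
		return fmt.Sprintf("string %q", v)
	default:
		return "unknown"
	}
}

func main() {
	fmt.Println(describe(42), describe("hi"), describe(3.5))
}
```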
Detects type switches that can benefit from type guard clause with variable.
Detects unneeded parentheses inside type expressions and suggests removing them.
Detects dereference expressions that can be omitted.
Detects function literals that can be simplified.
Detects unnamed results that may benefit from names.
Detects unnecessary braced statement blocks.
Detects slice expressions that can be simplified to sliced expression itself.
Detects value swapping code that is not using parallel assignment.
Detects conditions that are unsafe due to not being exhaustive.
Detects function calls that can be replaced with convenience wrappers.
Detects Yoda-style expressions and suggests replacing them.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import copy
from math import ceil
from IPy import IP
import requests
import json
from uuid import uuid4
import random
import time
import jimit as ji
from flask import Blueprint, url_for, request
from jimvc.api.base import Base
from jimvc.models.initialize import dev_table
from jimvc.models import app_config
from jimvc.models import GuestState
from jimvc.models import Service
from jimvc.models import IPPool
from jimvc.models import ReservedIP
from jimvc.models import DiskState, Host
from jimvc.models import Database as db
from jimvc.models import Config
from jimvc.models import Disk
from jimvc.models import Rules
from jimvc.models import Utils
from jimvc.models import Guest
from jimvc.models import OSTemplateImage
from jimvc.models import OSTemplateProfile
from jimvc.models import OSTemplateInitializeOperate
from jimvc.models import GuestXML
from jimvc.models import SSHKeyGuestMapping
from jimvc.models import SSHKey
from jimvc.models import Snapshot
from jimvc.models import status
__author__ = 'James Iter'
__date__ = '2017/3/22'
__contact__ = 'james.iter.cn@gmail.com'
__copyright__ = '(c) 2017 by James Iter.'
blueprint = Blueprint(
'api_guest',
__name__,
url_prefix='/api/guest'
)
blueprints = Blueprint(
'api_guests',
__name__,
url_prefix='/api/guests'
)
guest_base = Base(the_class=Guest, the_blueprint=blueprint, the_blueprints=blueprints)
os_template_image_base = Base(the_class=OSTemplateImage, the_blueprint=blueprint, the_blueprints=blueprints)
os_template_profile_base = Base(the_class=OSTemplateProfile, the_blueprint=blueprint, the_blueprints=blueprints)
@Utils.dumps2response
def r_create():
args_rules = [
Rules.CPU.value,
Rules.MEMORY.value,
Rules.BANDWIDTH.value,
Rules.BANDWIDTH_UNIT.value,
Rules.OS_TEMPLATE_IMAGE_ID.value,
Rules.QUANTITY.value,
Rules.REMARK.value,
Rules.PASSWORD.value,
Rules.LEASE_TERM.value
]
if 'node_id' in request.json:
args_rules.append(
Rules.NODE_ID.value
)
if 'ssh_keys_id' in request.json:
args_rules.append(
Rules.SSH_KEYS_ID.value
)
if 'service_id' in request.json:
args_rules.append(
Rules.SERVICE_ID.value
)
if 'autostart' in request.json:
args_rules.append(
Rules.AUTOSTART.value
)
try:
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
ji.Check.previewing(args_rules, request.json)
config = Config()
config.id = 1
config.get()
os_template_image = OSTemplateImage()
os_template_profile = OSTemplateProfile()
os_template_image.id = request.json.get('os_template_image_id')
if not os_template_image.exist():
ret['state'] = ji.Common.exchange_state(40450)
ret['state']['sub']['zh-cn'] = ''.join([ret['state']['sub']['zh-cn'], ': ', os_template_image.id.__str__()])
return ret
os_template_image.get()
os_template_profile.id = os_template_image.os_template_profile_id
os_template_profile.get()
os_template_initialize_operates, os_template_initialize_operates_count = \
OSTemplateInitializeOperate.get_by_filter(
filter_str='os_template_initialize_operate_set_id:eq:' +
os_template_profile.os_template_initialize_operate_set_id.__str__())
node_id = request.json.get('node_id', None)
# By default, only take hosts that allow random guest placement
available_hosts = Host.get_available_hosts(nonrandom=False)
# When a host is specified, take all alive hosts
if node_id is not None:
available_hosts = Host.get_available_hosts(nonrandom=None)
if available_hosts.__len__() == 0:
ret['state'] = ji.Common.exchange_state(50351)
return ret
available_hosts_mapping_by_node_id = dict()
for host in available_hosts:
if host['node_id'] not in available_hosts_mapping_by_node_id:
available_hosts_mapping_by_node_id[host['node_id']] = host
if node_id is not None and node_id not in available_hosts_mapping_by_node_id:
ret['state'] = ji.Common.exchange_state(50351)
return ret
ssh_keys_id = request.json.get('ssh_keys_id', list())
ssh_keys = list()
ssh_key_guest_mapping = SSHKeyGuestMapping()
if ssh_keys_id.__len__() > 0:
rows, _ = SSHKey.get_by_filter(
filter_str=':'.join(['id', 'in', ','.join(_id.__str__() for _id in ssh_keys_id)]))
for row in rows:
ssh_keys.append(row['public_key'])
# Make sure the target service group exists
service = Service()
service.id = request.json.get('service_id', 1)
service.get()
bandwidth = request.json.get('bandwidth')
bandwidth_unit = request.json.get('bandwidth_unit')
if bandwidth_unit == 'k':
bandwidth = bandwidth * 1000
elif bandwidth_unit == 'm':
bandwidth = bandwidth * 1000 ** 2
elif bandwidth_unit == 'g':
bandwidth = bandwidth * 1000 ** 3
else:
ret = dict()
ret['state'] = ji.Common.exchange_state(41203)
raise ji.PreviewingError(json.dumps(ret, ensure_ascii=False))
# http://man7.org/linux/man-pages/man8/tc.8.html
# If the bandwidth exceeds the maximum rate tc can control, treat it as unlimited
# 34359738360 is the maximum byte rate tc can control, converted to bits
if bandwidth > 34359738360:
bandwidth = 0
quantity = request.json.get('quantity')
occupied_ips = list()
occupied_vnc_ports = list()
rows, count = Guest.get_all()
for row in rows:
occupied_ips.append(row['ip'])
occupied_vnc_ports.append(row['vnc_port'])
rows, count = ReservedIP.get_all()
for row in rows:
occupied_ips.append(row['ip'])
rows, count = IPPool.get_by_filter(filter_str=':'.join(['activity', 'eq', '1']))
if count < 1:
ret['state'] = ji.Common.exchange_state(50350)
return ret
ip_pool = IPPool()
ip_pool.id = rows[0]['id']
ip_pool.get()
guest_ip_generator = ip_pool.ip_generator(occupied_ips=occupied_ips)
guest_vnc_port_generator = ip_pool.vnc_port_generator(occupied_vnc_ports=occupied_vnc_ports)
while quantity:
quantity -= 1
guest = Guest()
guest.uuid = uuid4().__str__()
guest.cpu = request.json.get('cpu')
# Guest memory unit; the template generation method already sets it to GiB
guest.memory = request.json.get('memory')
guest.bandwidth = bandwidth
guest.os_template_image_id = request.json.get('os_template_image_id')
guest.label = ji.Common.generate_random_code(length=8)
guest.remark = request.json.get('remark', '')
guest.autostart = request.json.get('autostart', False)
guest.password = request.json.get('password')
if guest.password is None or guest.password.__len__() < 1:
guest.password = ji.Common.generate_random_code(length=16)
guest.ip = guest_ip_generator.next()
guest.vnc_port = guest_vnc_port_generator.next()
guest.network = config.vm_network
guest.manage_network = config.vm_manage_network
guest.vnc_password = ji.Common.generate_random_code(length=16)
disk = Disk()
disk.uuid = guest.uuid
disk.remark = guest.label.__str__() + '_SystemImage'
disk.format = 'qcow2'
disk.sequence = 0
disk.size = 0
disk.path = config.storage_path + '/' + disk.uuid + '.' + disk.format
disk.guest_uuid = ''
# disk.node_id is updated by the guest event processor. During migration, its node_id changes. See around @models/event_processory.py:111.
disk.node_id = 0
disk.quota(config=config)
disk.create()
if node_id is None:
# Distribute tasks evenly across the available compute nodes
chosen_host = available_hosts[quantity % available_hosts.__len__()]
else:
chosen_host = available_hosts_mapping_by_node_id[node_id]
guest.node_id = chosen_host['node_id']
guest.service_id = service.id
guest_xml = GuestXML(host=chosen_host, guest=guest, disk=disk, config=config,
os_type=os_template_profile.os_type)
guest.xml = guest_xml.get_domain()
guest.node_id = int(guest.node_id)
guest.create()
ssh_key_guest_mapping.guest_uuid = guest.uuid
if ssh_keys_id.__len__() > 0:
for ssh_key_id in ssh_keys_id:
ssh_key_guest_mapping.ssh_key_id = ssh_key_id
ssh_key_guest_mapping.create()
if os_template_profile.os_distro == 'coreos':
ip_pool.netmask = IP(guest.ip).make_net(ip_pool.netmask).prefixlen().__str__()
# Replace placeholders with actual content
_os_template_initialize_operates = copy.deepcopy(os_template_initialize_operates)
for k, v in enumerate(_os_template_initialize_operates):
_os_template_initialize_operates[k]['content'] = v['content'].replace('{IP}', guest.ip).\
replace('{HOSTNAME}', guest.label). \
replace('{PASSWORD}', guest.password). \
replace('{NETMASK}', ip_pool.netmask).\
replace('{GATEWAY}', ip_pool.gateway).\
replace('{DNS1}', ip_pool.dns1).\
replace('{DNS2}', ip_pool.dns2). \
replace('{SSH-KEY}', '\n'.join(ssh_keys))
_os_template_initialize_operates[k]['command'] = v['command'].replace('{IP}', guest.ip). \
replace('{HOSTNAME}', guest.label). \
replace('{PASSWORD}', guest.password). \
replace('{NETMASK}', ip_pool.netmask). \
replace('{GATEWAY}', ip_pool.gateway). \
replace('{DNS1}', ip_pool.dns1). \
replace('{DNS2}', ip_pool.dns2). \
replace('{SSH-KEY}', '\n'.join(ssh_keys))
message = {
'_object': 'guest',
'action': 'create',
'uuid': guest.uuid,
'storage_mode': config.storage_mode,
'dfs_volume': config.dfs_volume,
'node_id': guest.node_id,
'autostart': guest.autostart,
'name': guest.label,
'template_path': os_template_image.path,
'os_type': os_template_profile.os_type,
'disks': [disk.__dict__],
'xml': guest_xml.get_domain(),
'os_template_initialize_operates': _os_template_initialize_operates,
'passback_parameters': {}
}
Utils.emit_instruction(message=json.dumps(message, ensure_ascii=False))
return ret
except ji.PreviewingError, e:
return json.loads(e.message)
@Utils.dumps2response
def r_autostart(uuids, autostart):
args_rules = [
Rules.UUIDS.value,
Rules.AUTOSTART.value
]
if str(autostart).lower() in ['false', '0']:
autostart = False
else:
autostart = True
try:
ji.Check.previewing(args_rules, {'uuids': uuids, 'autostart': autostart})
guest = Guest()
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
message = {
'_object': 'guest',
'action': 'autostart',
'uuid': uuid,
'node_id': guest.node_id,
'autostart': autostart,
'passback_parameters': {'autostart': autostart}
}
Utils.emit_instruction(message=json.dumps(message))
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
return ret
except ji.PreviewingError, e:
return json.loads(e.message)
@Utils.dumps2response
def r_reboot(uuids):
args_rules = [
Rules.UUIDS.value
]
try:
ji.Check.previewing(args_rules, {'uuids': uuids})
guest = Guest()
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
message = {
'_object': 'guest',
'action': 'reboot',
'uuid': uuid,
'node_id': guest.node_id
}
Utils.emit_instruction(message=json.dumps(message))
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
return ret
except ji.PreviewingError, e:
return json.loads(e.message)
@Utils.dumps2response
def r_force_reboot(uuids):
args_rules = [
Rules.UUIDS.value
]
try:
ji.Check.previewing(args_rules, {'uuids': uuids})
guest = Guest()
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
disks, _ = Disk.get_by_filter(filter_str=':'.join(['guest_uuid', 'eq', guest.uuid]))
message = {
'_object': 'guest',
'action': 'force_reboot',
'uuid': uuid,
'node_id': guest.node_id,
'disks': disks
}
Utils.emit_instruction(message=json.dumps(message))
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
return ret
except ji.PreviewingError, e:
return json.loads(e.message)
@Utils.dumps2response
def r_shutdown(uuids):
args_rules = [
Rules.UUIDS.value
]
try:
ji.Check.previewing(args_rules, {'uuids': uuids})
guest = Guest()
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
message = {
'_object': 'guest',
'action': 'shutdown',
'uuid': uuid,
'node_id': guest.node_id
}
Utils.emit_instruction(message=json.dumps(message))
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
return ret
except ji.PreviewingError, e:
return json.loads(e.message)
@Utils.dumps2response
def r_force_shutdown(uuids):
args_rules = [
Rules.UUIDS.value
]
try:
ji.Check.previewing(args_rules, {'uuids': uuids})
guest = Guest()
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
message = {
'_object': 'guest',
'action': 'force_shutdown',
'uuid': uuid,
'node_id': guest.node_id
}
Utils.emit_instruction(message=json.dumps(message))
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
return ret
except ji.PreviewingError, e:
return json.loads(e.message)
@Utils.dumps2response
def r_boot(uuids):
# TODO: enforce state-dependency checks, e.g. boot must not operate on a suspended instance.
args_rules = [
Rules.UUIDS.value
]
try:
ji.Check.previewing(args_rules, {'uuids': uuids})
guest = Guest()
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
config = Config()
config.id = 1
config.get()
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
disks, _ = Disk.get_by_filter(filter_str=':'.join(['guest_uuid', 'eq', guest.uuid]))
message = {
'_object': 'guest',
'action': 'boot',
'uuid': uuid,
'node_id': guest.node_id,
'passback_parameters': {},
'disks': disks
}
Utils.emit_instruction(message=json.dumps(message))
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
return ret
except ji.PreviewingError, e:
return json.loads(e.message)
@Utils.dumps2response
def r_suspend(uuids):
args_rules = [
Rules.UUIDS.value
]
try:
ji.Check.previewing(args_rules, {'uuids': uuids})
guest = Guest()
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
message = {
'_object': 'guest',
'action': 'suspend',
'uuid': uuid,
'node_id': guest.node_id
}
Utils.emit_instruction(message=json.dumps(message))
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
return ret
except ji.PreviewingError, e:
return json.loads(e.message)
@Utils.dumps2response
def r_resume(uuids):
args_rules = [
Rules.UUIDS.value
]
try:
ji.Check.previewing(args_rules, {'uuids': uuids})
guest = Guest()
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
message = {
'_object': 'guest',
'action': 'resume',
'uuid': uuid,
'node_id': guest.node_id
}
Utils.emit_instruction(message=json.dumps(message))
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
return ret
except ji.PreviewingError, e:
return json.loads(e.message)
@Utils.dumps2response
def r_delete(uuids):
args_rules = [
Rules.UUIDS.value
]
# TODO: add a switch controlling whether attached data disks are deleted too. If True, delete them along with the guest; otherwise just mark them as no longer in use.
try:
ji.Check.previewing(args_rules, {'uuids': uuids})
guest = Guest()
# Verify that instances for all of the specified UUIDs exist
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
config = Config()
config.id = 1
config.get()
# Perform the delete operation
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
message = {
'_object': 'guest',
'action': 'delete',
'uuid': uuid,
'storage_mode': config.storage_mode,
'dfs_volume': config.dfs_volume,
'node_id': guest.node_id
}
Utils.emit_instruction(message=json.dumps(message))
# Delete guests whose creation failed
if guest.status == status.GuestState.dirty.value:
disk = Disk()
disk.uuid = guest.uuid
disk.get_by('uuid')
if disk.state == status.DiskState.pending.value:
disk.delete()
guest.delete()
SSHKeyGuestMapping.delete_by_filter(filter_str=':'.join(['guest_uuid', 'eq', guest.uuid]))
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
return ret
except ji.PreviewingError, e:
return json.loads(e.message)
@Utils.dumps2response
def r_attach_disk(uuid, disk_uuid):
args_rules = [
Rules.UUID.value,
Rules.DISK_UUID.value
]
try:
ji.Check.previewing(args_rules, {'uuid': uuid, 'disk_uuid': disk_uuid})
guest = Guest()
guest.uuid = uuid
guest.get_by('uuid')
disk = Disk()
disk.uuid = disk_uuid
disk.get_by('uuid')
config = Config()
config.id = 1
config.get()
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
# Check whether the disk to attach is idle
if disk.guest_uuid.__len__() > 0 or disk.state != DiskState.idle.value:
ret['state'] = ji.Common.exchange_state(41258)
return ret
# Check whether the guest is in a usable state
if guest.status in (status.GuestState.no_state.value, status.GuestState.dirty.value):
ret['state'] = ji.Common.exchange_state(41259)
return ret
# Check whether the guest and the disk are on the same host
if config.storage_mode in [status.StorageMode.local.value, status.StorageMode.shared_mount.value]:
if guest.node_id != disk.node_id:
ret['state'] = ji.Common.exchange_state(41260)
return ret
# Determine the disk's sequence on the target guest by finding an unused sequence number
disk.guest_uuid = guest.uuid
disks, count = disk.get_by_filter(filter_str='guest_uuid:in:' + guest.uuid)
already_used_sequence = list()
for _disk in disks:
already_used_sequence.append(_disk['sequence'])
for sequence in range(0, dev_table.__len__()):
if sequence not in already_used_sequence:
disk.sequence = sequence
break
disk.state = DiskState.mounting.value
guest_xml = GuestXML(guest=guest, disk=disk, config=config)
message = {
'_object': 'guest',
'action': 'attach_disk',
'uuid': uuid,
'node_id': guest.node_id,
'xml': guest_xml.get_disk(),
'passback_parameters': {'disk_uuid': disk.uuid, 'sequence': disk.sequence},
'disks': [disk.__dict__]
}
Utils.emit_instruction(message=json.dumps(message))
disk.update()
return ret
except ji.PreviewingError, e:
return json.loads(e.message)
@Utils.dumps2response
def r_detach_disk(disk_uuid):
args_rules = [
Rules.DISK_UUID.value
]
try:
ji.Check.previewing(args_rules, {'disk_uuid': disk_uuid})
disk = Disk()
disk.uuid = disk_uuid
disk.get_by('uuid')
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
if disk.state != DiskState.mounted.value or disk.sequence == 0:
# Means the disk is not used by any instance and has already been detached
# Sequence 0 marks the instance's system disk, which must not be detached
# TODO: give the system disk its own dedicated set of states
return ret
guest = Guest()
guest.uuid = disk.guest_uuid
guest.get_by('uuid')
# Check whether the guest is in a usable state
if guest.status in (status.GuestState.no_state.value, status.GuestState.dirty.value):
ret['state'] = ji.Common.exchange_state(41259)
return ret
config = Config()
config.id = 1
config.get()
guest_xml = GuestXML(guest=guest, disk=disk, config=config)
message = {
'_object': 'guest',
'action': 'detach_disk',
'uuid': disk.guest_uuid,
'node_id': guest.node_id,
'xml': guest_xml.get_disk(),
'passback_parameters': {'disk_uuid': disk.uuid}
}
Utils.emit_instruction(message=json.dumps(message))
disk.state = DiskState.unloading.value
disk.update()
return ret
except ji.PreviewingError, e:
return json.loads(e.message)
@Utils.dumps2response
def r_migrate(uuids, node_id):
args_rules = [
Rules.UUIDS.value,
Rules.NODE_ID.value
]
try:
ji.Check.previewing(args_rules, {'uuids': uuids, 'node_id': node_id})
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
config = Config()
config.id = 1
config.get()
# Take all alive hosts
available_hosts = Host.get_available_hosts(nonrandom=None)
if available_hosts.__len__() == 0:
ret['state'] = ji.Common.exchange_state(50351)
return ret
available_hosts_mapping_by_node_id = dict()
for host in available_hosts:
if host['node_id'] not in available_hosts_mapping_by_node_id:
available_hosts_mapping_by_node_id[host['node_id']] = host
dst_ip = available_hosts_mapping_by_node_id[node_id]['interfaces'][config.vm_manage_network]['ip']
guest = Guest()
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
# Ignore migration requests for guests on compute nodes that are down
# Ignore migration requests whose target node equals the guest's current node
if guest.node_id.__str__() not in available_hosts_mapping_by_node_id or guest.node_id.__str__() == node_id:
continue
message = {
'_object': 'guest',
'action': 'migrate',
'uuid': uuid,
'node_id': guest.node_id,
'storage_mode': config.storage_mode,
'duri': 'qemu+ssh://' + dst_ip + '/system'
}
Utils.emit_instruction(message=json.dumps(message))
return ret
except ji.PreviewingError, e:
return json.loads(e.message)
@Utils.dumps2response
def r_get(uuids):
ret = guest_base.get(ids=uuids, ids_rule=Rules.UUIDS.value, by_field='uuid')
if '200' != ret['state']['code']:
return ret
rows, _ = SSHKeyGuestMapping.get_by_filter(filter_str=':'.join(['guest_uuid', 'in', uuids]))
guest_uuid_ssh_key_id_mapping = dict()
ssh_keys_id = list()
for row in rows:
if row['ssh_key_id'] not in ssh_keys_id:
ssh_keys_id.append(row['ssh_key_id'].__str__())
if row['guest_uuid'] not in guest_uuid_ssh_key_id_mapping:
guest_uuid_ssh_key_id_mapping[row['guest_uuid']] = list()
guest_uuid_ssh_key_id_mapping[row['guest_uuid']].append(row['ssh_key_id'])
rows, _ = SSHKey.get_by_filter(filter_str=':'.join(['id', 'in', ','.join(ssh_keys_id)]))
ssh_key_id_mapping = dict()
for row in rows:
row['url'] = url_for('v_ssh_keys.show')
ssh_key_id_mapping[row['id']] = row
hosts_url = url_for('api_hosts.r_get_by_filter', _external=True)
hosts_ret = requests.get(url=hosts_url, cookies=request.cookies)
hosts_ret = json.loads(hosts_ret.content)
hosts_mapping_by_node_id = dict()
for host in hosts_ret['data']:
hosts_mapping_by_node_id[int(host['node_id'])] = host
if -1 == uuids.find(','):
if 'ssh_keys' not in ret['data']:
ret['data']['ssh_keys'] = list()
if ret['data']['uuid'] in guest_uuid_ssh_key_id_mapping:
for ssh_key_id in guest_uuid_ssh_key_id_mapping[ret['data']['uuid']]:
if ssh_key_id not in ssh_key_id_mapping:
continue
ret['data']['ssh_keys'].append(ssh_key_id_mapping[ssh_key_id])
if not hosts_mapping_by_node_id[ret['data']['node_id']]['alive']:
ret['data']['status'] = GuestState.no_state.value
else:
for i, guest in enumerate(ret['data']):
if 'ssh_keys' not in ret['data'][i]:
ret['data'][i]['ssh_keys'] = list()
if ret['data'][i]['uuid'] in guest_uuid_ssh_key_id_mapping:
for ssh_key_id in guest_uuid_ssh_key_id_mapping[ret['data'][i]['uuid']]:
if ssh_key_id not in ssh_key_id_mapping:
continue
ret['data'][i]['ssh_keys'].append(ssh_key_id_mapping[ssh_key_id])
if not hosts_mapping_by_node_id[ret['data'][i]['node_id']]['alive']:
ret['data'][i]['status'] = GuestState.no_state.value
return ret
def exchange_guest_os_templates_logo(os_templates_image_mapping_by_id=None, os_templates_profile_mapping_by_id=None,
os_template_image_id=None):
assert isinstance(os_templates_image_mapping_by_id, dict)
assert isinstance(os_templates_profile_mapping_by_id, dict)
assert isinstance(os_template_image_id, int)
if os_templates_image_mapping_by_id[os_template_image_id]['logo'] == "":
logo = os_templates_profile_mapping_by_id[os_templates_image_mapping_by_id[os_template_image_id]['os_template_profile_id']]['icon']
else:
logo = os_templates_image_mapping_by_id[os_template_image_id]['logo']
label = os_templates_image_mapping_by_id[os_template_image_id]['label']
return logo, label
def format_guest_status(_status, progress):
from jimvc.models import GuestState
color = 'FF645B'
icon = 'glyph-icon icon-bolt'
desc = 'Unknown state'
if _status == GuestState.booting.value:
color = '00BBBB'
icon = 'glyph-icon icon-circle'
desc = 'Booting'
elif _status == GuestState.running.value:
color = '00BB00'
icon = 'glyph-icon icon-circle'
desc = 'Running'
elif _status == GuestState.creating.value:
color = 'FFC543'
icon = 'glyph-icon icon-spinner'
desc = ' '.join(['Creating', str(progress) + '%'])
elif _status == GuestState.blocked.value:
color = '3D4245'
icon = 'glyph-icon icon-minus-square'
desc = 'Blocked'
elif _status == GuestState.paused.value:
color = 'B7B904'
icon = 'glyph-icon icon-pause'
desc = 'Paused'
elif _status == GuestState.shutdown.value:
color = '4E5356'
icon = 'glyph-icon icon-terminal'
desc = 'Shut down'
elif _status == GuestState.shutoff.value:
color = 'FFC543'
icon = 'glyph-icon icon-plug'
desc = 'Powered off'
elif _status == GuestState.crashed.value:
color = '9E2927'
icon = 'glyph-icon icon-question'
desc = 'Crashed'
elif _status == GuestState.pm_suspended.value:
color = 'FCFF07'
icon = 'glyph-icon icon-anchor'
desc = 'Suspended'
elif _status == GuestState.migrating.value:
color = '1CF5E7'
icon = 'glyph-icon icon-space-shuttle'
desc = 'Migrating'
elif _status == GuestState.dirty.value:
color = 'FF0707'
icon = 'glyph-icon icon-remove'
desc = 'Creation failed, pending cleanup'
else:
pass
return '<span class="{icon}" style="color: #{color};"> {desc}</span>'.format(
icon=icon, color=color, desc=desc)
def exchange_guest_bandwidth(bandwidth=None):
assert isinstance(bandwidth, int)
if bandwidth == 0:
bandwidth = '<span style="font-size: 16px;" title="Unlimited bandwidth"> &#8734;</span>'
elif 0 < bandwidth < 1000 ** 2:
bandwidth = str(bandwidth // 1000) + ' Kbps'
elif 1000 ** 2 <= bandwidth < 1000 ** 3:
bandwidth = str(bandwidth // 1000 ** 2) + ' Mbps'
else:
bandwidth = str(bandwidth // 1000 ** 3) + ' Gbps'
return bandwidth
@Utils.dumps2response
def r_get_by_filter():
ret = guest_base.get_by_filter()
uuids = list()
for guest in ret['data']:
uuids.append(guest['uuid'])
rows, _ = SSHKeyGuestMapping.get_by_filter(filter_str=':'.join(['guest_uuid', 'in', ','.join(uuids)]))
guest_uuid_ssh_key_id_mapping = dict()
ssh_keys_id = list()
for row in rows:
if row['ssh_key_id'] not in ssh_keys_id:
ssh_keys_id.append(row['ssh_key_id'].__str__())
if row['guest_uuid'] not in guest_uuid_ssh_key_id_mapping:
guest_uuid_ssh_key_id_mapping[row['guest_uuid']] = list()
guest_uuid_ssh_key_id_mapping[row['guest_uuid']].append(row['ssh_key_id'])
rows, _ = SSHKey.get_by_filter(filter_str=':'.join(['id', 'in', ','.join(ssh_keys_id)]))
ssh_key_id_mapping = dict()
for row in rows:
row['url'] = url_for('v_ssh_keys.show')
ssh_key_id_mapping[row['id']] = row
rows, _ = Snapshot.get_by_filter(filter_str=':'.join(['guest_uuid', 'in', ','.join(uuids)]))
snapshots_guest_uuid_mapping = dict()
for row in rows:
guest_uuid = row['guest_uuid']
if guest_uuid not in snapshots_guest_uuid_mapping:
snapshots_guest_uuid_mapping[guest_uuid] = list()
snapshots_guest_uuid_mapping[guest_uuid].append(row)
hosts_url = url_for('api_hosts.r_get_by_filter', _external=True)
hosts_ret = requests.get(url=hosts_url, cookies=request.cookies)
hosts_ret = json.loads(hosts_ret.content)
hosts_mapping_by_node_id = dict()
for host in hosts_ret['data']:
hosts_mapping_by_node_id[int(host['node_id'])] = host
os_templates_image, _ = OSTemplateImage.get_by_filter()
os_templates_image_mapping_by_id = dict()
for os_template_image in os_templates_image:
os_templates_image_mapping_by_id[os_template_image['id']] = os_template_image
os_templates_profile, _ = OSTemplateProfile.get_by_filter()
os_templates_profile_mapping_by_id = dict()
for os_template_profile in os_templates_profile:
os_templates_profile_mapping_by_id[os_template_profile['id']] = os_template_profile
for i, guest in enumerate(ret['data']):
guest_uuid = ret['data'][i]['uuid']
if 'ssh_keys' not in ret['data'][i]:
ret['data'][i]['ssh_keys'] = list()
if guest_uuid in guest_uuid_ssh_key_id_mapping:
for ssh_key_id in guest_uuid_ssh_key_id_mapping[guest_uuid]:
if ssh_key_id not in ssh_key_id_mapping:
continue
ret['data'][i]['ssh_keys'].append(ssh_key_id_mapping[ssh_key_id])
if 'snapshot' not in ret['data'][i]:
ret['data'][i]['snapshot'] = {
'creatable': True,
'mapping': list()
}
if guest_uuid in snapshots_guest_uuid_mapping:
ret['data'][i]['snapshot']['mapping'] = snapshots_guest_uuid_mapping[guest_uuid]
for snapshot in snapshots_guest_uuid_mapping[guest_uuid]:
if snapshot['progress'] == 100:
continue
else:
ret['data'][i]['snapshot']['creatable'] = False
if not hosts_mapping_by_node_id[ret['data'][i]['node_id']]['alive']:
ret['data'][i]['status'] = GuestState.no_state.value
ret['data'][i]['hostname'] = hosts_mapping_by_node_id[guest['node_id']]['hostname']
ret['data'][i]['html'] = dict()
ret['data'][i]['html']['logo'], ret['data'][i]['html']['os_template_label'] = exchange_guest_os_templates_logo(
os_templates_image_mapping_by_id=os_templates_image_mapping_by_id,
os_templates_profile_mapping_by_id=os_templates_profile_mapping_by_id,
os_template_image_id=guest['os_template_image_id'])
ret['data'][i]['html']['status'] = format_guest_status(_status=guest['status'], progress=guest['progress'])
ret['data'][i]['html']['bandwidth'] = exchange_guest_bandwidth(bandwidth=guest['bandwidth'])
return ret
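The snapshot "creatable" rule applied in the loop above — a guest may take a new snapshot only while no existing snapshot is still in progress — reduces to a one-liner; a standalone sketch (the helper name is mine):

```python
def snapshot_creatable(snapshots):
    # Creatable only when every snapshot has finished (progress == 100).
    # An empty list means nothing is in flight, hence creatable.
    return all(s['progress'] == 100 for s in snapshots)
```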
@Utils.dumps2response
def r_content_search():
ret = guest_base.content_search()
uuids = list()
for guest in ret['data']:
uuids.append(guest['uuid'])
rows, _ = SSHKeyGuestMapping.get_by_filter(filter_str=':'.join(['guest_uuid', 'in', ','.join(uuids)]))
guest_uuid_ssh_key_id_mapping = dict()
ssh_keys_id = list()
for row in rows:
if row['ssh_key_id'] not in ssh_keys_id:
ssh_keys_id.append(row['ssh_key_id'].__str__())
if row['guest_uuid'] not in guest_uuid_ssh_key_id_mapping:
guest_uuid_ssh_key_id_mapping[row['guest_uuid']] = list()
guest_uuid_ssh_key_id_mapping[row['guest_uuid']].append(row['ssh_key_id'])
rows, _ = SSHKey.get_by_filter(filter_str=':'.join(['id', 'in', ','.join(ssh_keys_id)]))
ssh_key_id_mapping = dict()
for row in rows:
row['url'] = url_for('v_ssh_keys.show')
ssh_key_id_mapping[row['id']] = row
rows, _ = Snapshot.get_by_filter(filter_str=':'.join(['guest_uuid', 'in', ','.join(uuids)]))
snapshots_guest_uuid_mapping = dict()
for row in rows:
guest_uuid = row['guest_uuid']
if guest_uuid not in snapshots_guest_uuid_mapping:
snapshots_guest_uuid_mapping[guest_uuid] = list()
snapshots_guest_uuid_mapping[guest_uuid].append(row)
hosts_url = url_for('api_hosts.r_get_by_filter', _external=True)
hosts_ret = requests.get(url=hosts_url, cookies=request.cookies)
hosts_ret = json.loads(hosts_ret.content)
hosts_mapping_by_node_id = dict()
for host in hosts_ret['data']:
hosts_mapping_by_node_id[int(host['node_id'])] = host
os_templates_image, _ = OSTemplateImage.get_by_filter()
os_templates_image_mapping_by_id = dict()
for os_template_image in os_templates_image:
os_templates_image_mapping_by_id[os_template_image['id']] = os_template_image
os_templates_profile, _ = OSTemplateProfile.get_by_filter()
os_templates_profile_mapping_by_id = dict()
for os_template_profile in os_templates_profile:
os_templates_profile_mapping_by_id[os_template_profile['id']] = os_template_profile
for i, guest in enumerate(ret['data']):
guest_uuid = ret['data'][i]['uuid']
if 'ssh_keys' not in ret['data'][i]:
ret['data'][i]['ssh_keys'] = list()
if guest_uuid in guest_uuid_ssh_key_id_mapping:
for ssh_key_id in guest_uuid_ssh_key_id_mapping[guest_uuid]:
if ssh_key_id not in ssh_key_id_mapping:
continue
ret['data'][i]['ssh_keys'].append(ssh_key_id_mapping[ssh_key_id])
if 'snapshot' not in ret['data'][i]:
ret['data'][i]['snapshot'] = {
'creatable': True,
'mapping': list()
}
if guest_uuid in snapshots_guest_uuid_mapping:
ret['data'][i]['snapshot']['mapping'] = snapshots_guest_uuid_mapping[guest_uuid]
for snapshot in snapshots_guest_uuid_mapping[guest_uuid]:
if snapshot['progress'] == 100:
continue
else:
ret['data'][i]['snapshot']['creatable'] = False
if not hosts_mapping_by_node_id[ret['data'][i]['node_id']]['alive']:
ret['data'][i]['status'] = GuestState.no_state.value
ret['data'][i]['hostname'] = hosts_mapping_by_node_id[guest['node_id']]['hostname']
ret['data'][i]['html'] = dict()
ret['data'][i]['html']['logo'], ret['data'][i]['html']['os_template_label'] = exchange_guest_os_templates_logo(
os_templates_image_mapping_by_id=os_templates_image_mapping_by_id,
os_templates_profile_mapping_by_id=os_templates_profile_mapping_by_id,
os_template_image_id=guest['os_template_image_id'])
ret['data'][i]['html']['status'] = format_guest_status(_status=guest['status'], progress=guest['progress'])
ret['data'][i]['html']['bandwidth'] = exchange_guest_bandwidth(bandwidth=guest['bandwidth'])
return ret
@Utils.dumps2response
def r_distribute_count():
from jimvc.models import Guest
rows, count = Guest.get_all()
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
ret['data'] = {
'os_template_image_id': dict(),
'status': dict(),
'node_id': dict(),
'cpu_memory': dict(),
'cpu': 0,
'memory': 0,
'guests': rows.__len__()
}
for guest in rows:
if guest['os_template_image_id'] not in ret['data']['os_template_image_id']:
ret['data']['os_template_image_id'][guest['os_template_image_id']] = 0
if guest['status'] not in ret['data']['status']:
ret['data']['status'][guest['status']] = 0
if guest['node_id'] not in ret['data']['node_id']:
ret['data']['node_id'][guest['node_id']] = 0
cpu_memory = '_'.join([str(guest['cpu']), str(guest['memory'])])
if cpu_memory not in ret['data']['cpu_memory']:
ret['data']['cpu_memory'][cpu_memory] = 0
ret['data']['os_template_image_id'][guest['os_template_image_id']] += 1
ret['data']['status'][guest['status']] += 1
ret['data']['node_id'][guest['node_id']] += 1
ret['data']['cpu_memory'][cpu_memory] += 1
ret['data']['cpu'] += guest['cpu']
ret['data']['memory'] += guest['memory']
return ret
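The tallying in r_distribute_count is plain counting; an equivalent sketch with collections.Counter (function name and sample field values are made up):

```python
from collections import Counter

def distribute_count(guests):
    # Same aggregates as r_distribute_count above, expressed as Counters.
    return {
        'os_template_image_id': Counter(g['os_template_image_id'] for g in guests),
        'status': Counter(g['status'] for g in guests),
        'node_id': Counter(g['node_id'] for g in guests),
        'cpu_memory': Counter('_'.join([str(g['cpu']), str(g['memory'])]) for g in guests),
        'cpu': sum(g['cpu'] for g in guests),
        'memory': sum(g['memory'] for g in guests),
        'guests': len(guests),
    }
```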
@Utils.dumps2response
def r_update(uuids):
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
ret['data'] = list()
args_rules = [
Rules.UUIDS.value
]
if 'remark' in request.json:
args_rules.append(
Rules.REMARK.value,
)
if args_rules.__len__() < 2:
return ret
request.json['uuids'] = uuids
try:
ji.Check.previewing(args_rules, request.json)
guest = Guest()
        # Verify that all instances referenced by the given UUIDs exist
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
guest.remark = request.json.get('remark', guest.remark)
guest.update()
guest.get()
ret['data'].append(guest.__dict__)
return ret
    except ji.PreviewingError as e:
return json.loads(e.message)
@Utils.dumps2response
def r_revise_ip(uuid, ip):
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
args_rules = [
Rules.UUID.value,
Rules.IP.value
]
try:
ji.Check.previewing(args_rules, {'uuid': uuid, 'ip': ip})
guest = Guest()
guest.uuid = uuid
guest.get_by('uuid')
guest.ip = ip
guest.update()
guest.get()
ret['data'] = guest.__dict__
return ret
    except ji.PreviewingError as e:
return json.loads(e.message)
@Utils.dumps2response
def r_reset_password(uuids, password):
args_rules = [
Rules.UUIDS.value,
Rules.PASSWORD.value
]
try:
ji.Check.previewing(args_rules, {'uuids': uuids, 'password': password})
guest = Guest()
os_template_image = OSTemplateImage()
os_template_profile = OSTemplateProfile()
        # Verify that all instances referenced by the given UUIDs exist
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
os_template_image.id = guest.os_template_image_id
os_template_image.get()
os_template_profile.id = os_template_image.os_template_profile_id
os_template_profile.get()
user = 'root'
if os_template_profile.os_type == 'windows':
user = 'administrator'
            # guest.password is updated by the guest event processor. See around @models/event_processory.py:189.
message = {
'_object': 'guest',
'action': 'reset_password',
'uuid': guest.uuid,
'node_id': guest.node_id,
'os_type': os_template_profile.os_type,
'user': user,
'password': password,
'passback_parameters': {'password': password}
}
Utils.emit_instruction(message=json.dumps(message, ensure_ascii=False))
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
return ret
    except ji.PreviewingError as e:
return json.loads(e.message)
@Utils.dumps2response
def r_allocate_bandwidth(uuids, bandwidth, bandwidth_unit):
args_rules = [
Rules.UUIDS.value,
Rules.BANDWIDTH_IN_URL.value,
Rules.BANDWIDTH_UNIT.value,
]
try:
ji.Check.previewing(args_rules, {'uuids': uuids, 'bandwidth': bandwidth, 'bandwidth_unit': bandwidth_unit})
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
bandwidth = int(bandwidth)
if bandwidth_unit == 'k':
bandwidth = bandwidth * 1000
elif bandwidth_unit == 'm':
bandwidth = bandwidth * 1000 ** 2
elif bandwidth_unit == 'g':
bandwidth = bandwidth * 1000 ** 3
else:
ret['state'] = ji.Common.exchange_state(41203)
return ret
# http://man7.org/linux/man-pages/man8/tc.8.html
        # If the bandwidth exceeds the maximum rate tc can shape, treat it as unlimited.
        # 34359738360 is tc's maximum controllable byte rate, expressed in bits.
if bandwidth > 34359738360:
bandwidth = 0
guest = Guest()
        # Verify that all instances referenced by the given UUIDs exist
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
guest.bandwidth = bandwidth
message = {
'_object': 'guest',
'action': 'allocate_bandwidth',
'uuid': guest.uuid,
'node_id': guest.node_id,
'bandwidth': guest.bandwidth,
'passback_parameters': {'bandwidth': guest.bandwidth}
}
Utils.emit_instruction(message=json.dumps(message, ensure_ascii=False))
return ret
    except ji.PreviewingError as e:
return json.loads(e.message)
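The unit handling and the tc ceiling in r_allocate_bandwidth can be sketched as a pure function (the name is mine; an unknown unit raises KeyError here instead of returning an error state):

```python
def normalize_bandwidth(value, unit):
    # Convert to bits per second; raises KeyError on an unknown unit.
    factor = {'k': 1000, 'm': 1000 ** 2, 'g': 1000 ** 3}[unit]
    bps = value * factor
    # tc caps out at (2**32 - 1) bytes/s; above that, fall back to unlimited (0).
    tc_max_bps = (2 ** 32 - 1) * 8  # == 34359738360 bits/s
    return 0 if bps > tc_max_bps else bps
```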
@Utils.dumps2response
def r_adjust_ability(uuids, cpu, memory):
args_rules = [
Rules.UUIDS.value,
Rules.CPU.value,
Rules.MEMORY.value,
]
try:
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
cpu = int(cpu)
memory = int(memory)
ji.Check.previewing(args_rules, {'uuids': uuids, 'cpu': cpu, 'memory': memory})
not_ready_yet_of_guests = list()
guest = Guest()
        # Verify that all instances referenced by the given UUIDs exist and are in an operable state (i.e. shut off).
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
if guest.status != status.GuestState.shutoff.value:
not_ready_yet_of_guests.append(guest.__dict__)
if not_ready_yet_of_guests.__len__() > 0:
ret['state'] = ji.Common.exchange_state(41261)
ret['data'] = not_ready_yet_of_guests
return ret
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
guest.cpu = cpu
guest.memory = memory
message = {
'_object': 'guest',
'action': 'adjust_ability',
'uuid': guest.uuid,
'node_id': guest.node_id,
'cpu': guest.cpu,
'memory': guest.memory,
'passback_parameters': {'cpu': guest.cpu, 'memory': guest.memory}
}
Utils.emit_instruction(message=json.dumps(message, ensure_ascii=False))
return ret
    except ji.PreviewingError as e:
return json.loads(e.message)
@Utils.dumps2response
def r_change_prepared_by(uuids, service_id):
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
ret['data'] = list()
args_rules = [
Rules.UUIDS.value,
Rules.SERVICE_ID_IN_URL.value
]
try:
ji.Check.previewing(args_rules, {'uuids': uuids, 'service_id': service_id})
guest = Guest()
        # Verify that all instances referenced by the given UUIDs exist
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
for uuid in uuids.split(','):
guest.uuid = uuid
guest.get_by('uuid')
guest.service_id = int(service_id)
guest.update()
guest.get()
ret['data'].append(guest.__dict__)
return ret
    except ji.PreviewingError as e:
return json.loads(e.message)
@Utils.dumps2response
def r_refresh_guest_state():
try:
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
        # Fetch all hosts that are alive
available_hosts = Host.get_available_hosts(nonrandom=None)
if available_hosts.__len__() == 0:
ret['state'] = ji.Common.exchange_state(50351)
return ret
for host in available_hosts:
message = {
'_object': 'global',
'action': 'refresh_guest_state',
'node_id': host['node_id']
}
            Utils.emit_instruction(message=json.dumps(message, ensure_ascii=False))

        return ret
    except ji.PreviewingError as e:
return json.loads(e.message)
@Utils.dumps2response
def r_show():
args = list()
page = int(request.args.get('page', 1))
page_size = int(request.args.get('page_size', 20))
keyword = request.args.get('keyword', None)
if page is not None:
args.append('page=' + page.__str__())
if page_size is not None:
args.append('page_size=' + page_size.__str__())
if keyword is not None:
args.append('keyword=' + keyword.__str__())
hosts_url = url_for('api_hosts.r_get_by_filter', _external=True)
guests_url = url_for('api_guests.r_get_by_filter', _external=True)
if keyword is not None:
guests_url = url_for('api_guests.r_content_search', _external=True)
if args.__len__() > 0:
guests_url = guests_url + '?' + '&'.join(args)
hosts_ret = requests.get(url=hosts_url, cookies=request.cookies)
hosts_ret = json.loads(hosts_ret.content)
hosts_mapping_by_node_id = dict()
for host in hosts_ret['data']:
hosts_mapping_by_node_id[int(host['node_id'])] = host
guests_ret = requests.get(url=guests_url, cookies=request.cookies)
guests_ret = json.loads(guests_ret.content)
os_templates_image, _ = OSTemplateImage.get_by_filter()
os_templates_image_mapping_by_id = dict()
for os_template_image in os_templates_image:
os_templates_image_mapping_by_id[os_template_image['id']] = os_template_image
os_templates_profile, _ = OSTemplateProfile.get_by_filter()
os_templates_profile_mapping_by_id = dict()
for os_template_profile in os_templates_profile:
os_templates_profile_mapping_by_id[os_template_profile['id']] = os_template_profile
last_page = int(ceil(guests_ret['paging']['total'] / float(page_size)))
page_length = 5
pages = list()
if page < int(ceil(page_length / 2.0)):
for i in range(1, page_length + 1):
pages.append(i)
if i == last_page or last_page == 0:
break
elif last_page - page < page_length / 2:
for i in range(last_page - page_length + 1, last_page + 1):
if i < 1:
continue
pages.append(i)
else:
for i in range(page - page_length / 2, page + int(ceil(page_length / 2.0))):
pages.append(i)
if i == last_page or last_page == 0:
break
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
ret['data'] = {
'guests': guests_ret['data'],
'os_templates_image_mapping_by_id': os_templates_image_mapping_by_id,
'os_templates_profile_mapping_by_id': os_templates_profile_mapping_by_id,
'hosts_mapping_by_node_id': hosts_mapping_by_node_id,
'paging': guests_ret['paging'],
'page': page,
'page_size': page_size,
'keyword': keyword,
'pages': pages,
'last_page': last_page
}
return ret
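The sliding page window built inside r_show can be pulled out into a testable helper; the sketch below mirrors the branching above (the helper name is mine):

```python
from math import ceil

def page_window(page, last_page, page_length=5):
    # Produce up to page_length page numbers centered on the current page,
    # clamped to [1, last_page], matching the three branches in r_show.
    pages = []
    if page < int(ceil(page_length / 2.0)):
        for i in range(1, page_length + 1):
            pages.append(i)
            if i == last_page or last_page == 0:
                break
    elif last_page - page < page_length // 2:
        pages = [i for i in range(last_page - page_length + 1, last_page + 1) if i >= 1]
    else:
        for i in range(page - page_length // 2, page + int(ceil(page_length / 2.0))):
            pages.append(i)
            if i == last_page or last_page == 0:
                break
    return pages
```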
@Utils.dumps2response
def r_vnc(uuid):
guest_ret = guest_base.get(ids=uuid, ids_rule=Rules.UUID.value, by_field='uuid')
if '200' != guest_ret['state']['code']:
return guest_ret
hosts_url = url_for('api_hosts.r_get_by_filter', _external=True)
hosts_ret = requests.get(url=hosts_url, cookies=request.cookies)
hosts_ret = json.loads(hosts_ret.content)
hosts_mapping_by_node_id = dict()
for host in hosts_ret['data']:
hosts_mapping_by_node_id[int(host['node_id'])] = host
port = random.randrange(50000, 60000)
while True:
if not Utils.port_is_opened(port=port):
break
port = random.randrange(50000, 60000)
payload = {'listen_port': port, 'target_host': hosts_mapping_by_node_id[guest_ret['data']['node_id']]['hostname'],
'target_port': guest_ret['data']['vnc_port']}
db.r.rpush(app_config['ipc_queue'], json.dumps(payload, ensure_ascii=False))
time.sleep(1)
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
ret['data'] = {
'port': port,
'vnc_password': guest_ret['data']['vnc_password']
}
return ret
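The VNC handler's random-port probe amounts to rejection sampling; a sketch with the availability check injected, so the project's Utils.port_is_opened (or any predicate) can be passed in:

```python
import random

def pick_free_port(is_opened, lo=50000, hi=60000, attempts=1000):
    # Keep drawing random ports until one is not already open.
    for _ in range(attempts):
        port = random.randrange(lo, hi)
        if not is_opened(port):
            return port
    raise RuntimeError('no free port found in range')
```

Bounding the attempts avoids the original loop's worst case of spinning forever when the whole range is occupied.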
@Utils.dumps2response
def r_detail(uuid):
hosts_url = url_for('api_hosts.r_get_by_filter', _external=True)
hosts_ret = requests.get(url=hosts_url, cookies=request.cookies)
hosts_ret = json.loads(hosts_ret.content)
hosts_mapping_by_node_id = dict()
for host in hosts_ret['data']:
hosts_mapping_by_node_id[int(host['node_id'])] = host
guest = Guest()
guest.uuid = uuid
guest.get_by(field='uuid')
guest.ssh_keys = list()
rows, _ = SSHKeyGuestMapping.get_by_filter(filter_str=':'.join(['guest_uuid', 'in', guest.uuid]))
ssh_keys_id = list()
for row in rows:
if row['ssh_key_id'] not in ssh_keys_id:
ssh_keys_id.append(row['ssh_key_id'].__str__())
rows, _ = SSHKey.get_by_filter(filter_str=':'.join(['id', 'in', ','.join(ssh_keys_id)]))
for row in rows:
row['url'] = url_for('v_ssh_keys.show')
if row['id'].__str__() not in ssh_keys_id:
continue
guest.ssh_keys.append(row)
os_template_image = OSTemplateImage()
os_template_image.id = guest.os_template_image_id.__str__()
os_template_image.get()
os_template_profiles, _ = OSTemplateProfile.get_by_filter()
os_templates_profile_mapping_by_id = dict()
for os_template_profile in os_template_profiles:
os_templates_profile_mapping_by_id[os_template_profile['id']] = os_template_profile
disks_url = url_for('api_disks.r_get_by_filter', filter='guest_uuid:in:' + guest.uuid, _external=True)
disks_ret = requests.get(url=disks_url, cookies=request.cookies)
disks = json.loads(disks_ret.content)['data']
if not hosts_mapping_by_node_id[guest.node_id]['alive']:
guest.status = GuestState.no_state.value
config = Config()
config.id = 1
config.get()
ret = dict()
ret['state'] = ji.Common.exchange_state(20000)
ret['data'] = {
'uuid': uuid,
'guest': guest.__dict__,
'os_template_image': os_template_image.__dict__,
'os_templates_profile_mapping_by_id': os_templates_profile_mapping_by_id,
'hosts_mapping_by_node_id': hosts_mapping_by_node_id,
'disks': disks,
'config': config.__dict__
}
return ret
|
Posted at 11:08 am on June 8, 2018 by Sam J.
Hey, don’t take it from us.
Federal investigators probing leaks of classified information to journalists secretly obtained phone and email records of New York Times reporter Ali Watkins, a Temple University graduate who once interned with the Daily News, the Times reported Thursday night.
The Times said Watkins told the newspaper about the relationship when it hired her and also told Buzzfeed and Politico editors when she worked with them.
The Times did not specify what leaks were being investigated, but cited Attorney General Jeff Sessions’ vow to crack down on leakers, and noted that similar tactics were pursued under President Barack Obama.
“Freedom of the press is a cornerstone of democracy, and communications between journalists and their sources demand protection,” said Eileen Murphy, a Times spokeswoman.
A Justice Department spokeswoman declined to comment to the Times.
Ya’ think? EL OH EL.
And if THAT isn’t bad enough? Check out this tweet from 2013 … we can’t MAKE this up.
That's one way to acquire a source.
And to think, this site is still FREE.
|
from warcio.archiveiterator import WARCIterator
import json
import argparse
import logging
import sys
import os
from collections import namedtuple
log = logging.getLogger(__name__)
IterItem = namedtuple('IterItem', ['type', 'id', 'date', 'url', 'item'])
class BaseWarcIter:
"""
Base class for a warc iterator. A warc iterator iterates over the social media
items recorded in a WARC file.
This supports payloads which are json or line-oriented json.
    Subclasses should override _select_record(), _item_iter(), item_types, and
possibly line_oriented.
"""
def __init__(self, filepaths):
if isinstance(filepaths, str):
self.filepaths = (filepaths,)
else:
self.filepaths = filepaths
def __iter__(self):
return self.iter()
@staticmethod
def _debug_counts(filename, record_count, yield_count, by_record_count=True):
should_debug = False
if by_record_count and record_count <= 100 and record_count % 10 == 0:
should_debug = True
elif by_record_count and 100 < record_count and record_count % 100 == 0:
should_debug = True
elif not by_record_count and yield_count <= 1000 and yield_count % 100 == 0:
should_debug = True
elif not by_record_count and 1000 < yield_count and yield_count % 1000 == 0:
should_debug = True
if should_debug:
log.debug("File %s. Processed %s records. Yielded %s items.", filename, record_count, yield_count)
def iter(self, limit_item_types=None, dedupe=False, item_date_start=None, item_date_end=None):
"""
:return: Iterator returning IterItems.
"""
seen_ids = {}
for filepath in self.filepaths:
log.info("Iterating over %s", filepath)
filename = os.path.basename(filepath)
with open(filepath, 'rb') as f:
yield_count = 0
for record_count, record in enumerate((r for r in WARCIterator(f) if r.rec_type == 'response')):
self._debug_counts(filename, record_count, yield_count, by_record_count=True)
record_url = record.rec_headers.get_header('WARC-Target-URI')
record_id = record.rec_headers.get_header('WARC-Record-ID')
if self._select_record(record_url):
stream = record.content_stream()
line = stream.readline().decode('utf-8')
while line:
json_obj = None
try:
if line != "\r\n":
# A non-line-oriented payload only has one payload part.
json_obj = json.loads(line)
except ValueError:
log.warning("Bad json in record %s: %s", record_id, line)
if json_obj:
for item_type, item_id, item_date, item in self._item_iter(record_url, json_obj):
# None for item_type indicates that the type is not handled. OK to ignore.
if item_type is not None:
yield_item = True
if limit_item_types and item_type not in limit_item_types:
yield_item = False
if item_date_start and item_date and item_date < item_date_start:
yield_item = False
if item_date_end and item_date and item_date > item_date_end:
yield_item = False
if not self._select_item(item):
yield_item = False
if dedupe and yield_item:
if item_id in seen_ids:
yield_item = False
else:
seen_ids[item_id] = True
if yield_item:
if item is not None:
yield_count += 1
self._debug_counts(filename, record_count, yield_count,
by_record_count=False)
yield IterItem(item_type, item_id, item_date, record_url, item)
else:
log.warn("Bad response in record %s", record_id)
line = stream.readline().decode('utf-8')
def _select_record(self, url):
"""
Return True to process this record. This allows a WarcIter to only process
records for the type of social media content that it handles.
"""
pass
def _select_item(self, item):
"""
Return True to select this item. This allows a WarcIter to filter items.
"""
return True
def print_iter(self, pretty=False, fp=sys.stdout, limit_item_types=None, print_item_type=False, dedupe=False):
for item_type, _, _, _, item in self.iter(limit_item_types=limit_item_types, dedupe=dedupe):
if print_item_type:
fp.write("{}:".format(item_type))
json.dump(item, fp, indent=4 if pretty else None)
fp.write("\n")
def _item_iter(self, url, json_obj):
"""
Returns an iterator over the social media item types and items (as JSON objects).
:returns item_type, item_id, item_date, item iterator
"""
pass
@staticmethod
def item_types():
"""
Returns a list of item types that are handled by this WarcIter.
"""
pass
@property
def line_oriented(self):
"""
Indicates whether the payload should be handled as line-oriented.
Subclasses that support line-oriented payloads should return True.
"""
return False
@staticmethod
def main(cls):
# Logging
logging.basicConfig(format='%(asctime)s: %(name)s --> %(message)s', level=logging.DEBUG)
parser = argparse.ArgumentParser()
item_types = cls.item_types()
if len(item_types) > 1:
parser.add_argument("--item-types",
help="A comma separated list of item types to limit the results. "
"Item types are {}".format(", ".join(item_types)))
parser.add_argument("--pretty", action="store_true", help="Format the json for viewing.")
parser.add_argument("--dedupe", action="store_true", help="Remove duplicate items.")
parser.add_argument("--print-item-type", action="store_true", help="Print the item type.")
parser.add_argument("--debug", type=lambda v: v.lower() in ("yes", "true", "t", "1"), nargs="?",
default="False", const="True")
parser.add_argument("filepaths", nargs="+", help="Filepath of the warc.")
args = parser.parse_args()
# Logging
logging.getLogger().setLevel(logging.DEBUG if args.debug else logging.INFO)
        main_limit_item_types = args.item_types.split(",") if getattr(args, "item_types", None) else None
cls(args.filepaths).print_iter(limit_item_types=main_limit_item_types, pretty=args.pretty,
print_item_type=args.print_item_type, dedupe=args.dedupe)
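The per-item decision chain inside BaseWarcIter.iter() — type limit, date window, dedupe — can be read as a single predicate; a standalone sketch (the function name is mine; dates only need to be comparable):

```python
def should_yield(item_type, item_id, item_date, limit_item_types=None,
                 date_start=None, date_end=None, seen_ids=None):
    # Mirrors the yield_item decision chain in BaseWarcIter.iter().
    if limit_item_types and item_type not in limit_item_types:
        return False
    if date_start and item_date and item_date < date_start:
        return False
    if date_end and item_date and item_date > date_end:
        return False
    if seen_ids is not None:
        if item_id in seen_ids:
            return False
        seen_ids[item_id] = True  # first sighting: record and allow
    return True
```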
|
Companies' software updates help manage demand, supply planning.
Software upgrades from Prescient Systems Inc. and PeopleSoft Inc. make strides toward bringing the demand planning and execution components of SCM closer.
By integrating these two phases of the production process, businesses and manufacturers will be better able to understand the big picture when budgeting and responding to changes in demand for their products.
Prescient, of West Chester, Pa., will ship this month Version 5.0 of its namesake supply chain management application suite, formerly called XEi. Dynamic alerts within Version 5.0's performance measurement application enable the software to push active and dynamic reports to users. New constraint-based planning capabilities factor in material availability and equipment limitations for more realistic forecasts.
The upgrade also offers the ability to manage integrated demand and supply planning in both distribution and manufacturing organizations. With Prescient 5.0, production planners can view plans by resources and by products, at item and group levels, graphically, or via tables and grids. Planners can see full details of the forecast and replenishment plans and can work with multiple plans, officials said.
Version 5.0 further strengthens the link between planning and execution for makers of consumer goods by providing a link into Wal-Mart Stores Inc.'s Retail Link electronic collaboration Web site. It also provides an integration tool that allows manufacturers to collaborate with retailers.
"Right now, we have Cognos [Inc.] Impromptu [analysis] tools sitting over an Oracle [Corp.] database, so when we want to review any reporting, we have to go to another tool," said Don Juliano, director of demand management at AAi.FosterGrant Inc., in Smithfield, R.I. "With Prescient 5.0, we could immediately turn around some reports."
Separately, PeopleSoft is reworking its production planning module to encompass a broader supply chain planning offering. The Pleasanton, Calif., company will release in the second quarter of next year a new version of its Supply Chain Planning suite that emphasizes linkages to business planning and budgeting and linking sales and finance to the supply chain.
Version 2.0 of PeopleSoft's Strategic Sourcing module, also due in the second quarter, will create a closer link between supply chain planning and strategic sourcing. This will enable a planner, an engineer and a buyer, for example, to collaborate and participate on a request-for-quote bid, as well as enable the supplier to provide specific information in the process, officials said.
"It is absolutely essential that we [in business] pull it all together," said PeopleSoft user Jim Prevo, CIO at Green Mountain Coffee Roasters Inc., in Waterbury, Vt. "The advantage of having integrated [that information] is those data models can be understood," Prevo said. "PeopleSoft is on that path, [although] they arent there by any means."
|
# Protocol Buffers - Google's data interchange format
# Copyright 2008 Google Inc. All rights reserved.
# http://code.google.com/p/protobuf/
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are
# met:
#
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above
# copyright notice, this list of conditions and the following disclaimer
# in the documentation and/or other materials provided with the
# distribution.
# * Neither the name of Google Inc. nor the names of its
# contributors may be used to endorse or promote products derived from
# this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
"""Constants and static functions to support protocol buffer wire format."""
__author__ = 'robinson@google.com (Will Robinson)'
import struct
from google.protobuf import descriptor
from google.protobuf import message
TAG_TYPE_BITS = 3 # Number of bits used to hold type info in a proto tag.
TAG_TYPE_MASK = (1 << TAG_TYPE_BITS) - 1 # 0x7
# These numbers identify the wire type of a protocol buffer value.
# We use the least-significant TAG_TYPE_BITS bits of the varint-encoded
# tag-and-type to store one of these WIRETYPE_* constants.
# These values must match WireType enum in google/protobuf/wire_format.h.
WIRETYPE_VARINT = 0
WIRETYPE_FIXED64 = 1
WIRETYPE_LENGTH_DELIMITED = 2
WIRETYPE_START_GROUP = 3
WIRETYPE_END_GROUP = 4
WIRETYPE_FIXED32 = 5
_WIRETYPE_MAX = 5
# Bounds for various integer types.
INT32_MAX = int((1 << 31) - 1)
INT32_MIN = int(-(1 << 31))
UINT32_MAX = (1 << 32) - 1
INT64_MAX = (1 << 63) - 1
INT64_MIN = -(1 << 63)
UINT64_MAX = (1 << 64) - 1
# "struct" format strings that will encode/decode the specified formats.
FORMAT_UINT32_LITTLE_ENDIAN = '<I'
FORMAT_UINT64_LITTLE_ENDIAN = '<Q'
FORMAT_FLOAT_LITTLE_ENDIAN = '<f'
FORMAT_DOUBLE_LITTLE_ENDIAN = '<d'
# We'll have to provide alternate implementations of AppendLittleEndian*() on
# any architectures where these checks fail.
if struct.calcsize(FORMAT_UINT32_LITTLE_ENDIAN) != 4:
raise AssertionError('Format "I" is not a 32-bit number.')
if struct.calcsize(FORMAT_UINT64_LITTLE_ENDIAN) != 8:
raise AssertionError('Format "Q" is not a 64-bit number.')
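The calcsize guards above pin down width; byte order is worth seeing too. A quick check of what these format strings actually produce:

```python
import struct

# '<I' is little-endian unsigned 32-bit: least-significant byte first.
packed = struct.pack('<I', 1)
assert packed == b'\x01\x00\x00\x00'

# Round-tripping a double through '<d' is lossless for exact binary values.
assert struct.unpack('<d', struct.pack('<d', 1.5))[0] == 1.5
```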
def PackTag(field_number, wire_type):
"""Returns an unsigned 32-bit integer that encodes the field number and
wire type information in standard protocol message wire format.
Args:
field_number: Expected to be an integer in the range [1, 1 << 29)
wire_type: One of the WIRETYPE_* constants.
"""
if not 0 <= wire_type <= _WIRETYPE_MAX:
raise message.EncodeError('Unknown wire type: %d' % wire_type)
return (field_number << TAG_TYPE_BITS) | wire_type
def UnpackTag(tag):
"""The inverse of PackTag(). Given an unsigned 32-bit number,
returns a (field_number, wire_type) tuple.
"""
return (tag >> TAG_TYPE_BITS), (tag & TAG_TYPE_MASK)
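A self-contained round trip of the tag key layout (names lower-cased to avoid shadowing the module's own): field number 1 with WIRETYPE_LENGTH_DELIMITED packs to 0x0A, the familiar first byte of many serialized messages.

```python
TAG_TYPE_BITS = 3
TAG_TYPE_MASK = (1 << TAG_TYPE_BITS) - 1

def pack_tag(field_number, wire_type):
    # Key = (field_number << 3) | wire_type, later varint-encoded on the wire.
    return (field_number << TAG_TYPE_BITS) | wire_type

def unpack_tag(tag):
    return (tag >> TAG_TYPE_BITS), (tag & TAG_TYPE_MASK)

assert pack_tag(1, 2) == 0x0A
assert unpack_tag(0x0A) == (1, 2)
```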
def ZigZagEncode(value):
"""ZigZag Transform: Encodes signed integers so that they can be
effectively used with varint encoding. See wire_format.h for
more details.
"""
if value >= 0:
return value << 1
return (value << 1) ^ (~0)
def ZigZagDecode(value):
"""Inverse of ZigZagEncode()."""
if not value & 0x1:
return value >> 1
return (value >> 1) ^ (~0)
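A round-trip check of the ZigZag transform, using the same logic as ZigZagEncode/ZigZagDecode above (Python's arbitrary-precision integers make the `~0` trick work without explicit masking):

```python
def zigzag_encode(value):
    if value >= 0:
        return value << 1
    return (value << 1) ^ (~0)

def zigzag_decode(value):
    if not value & 0x1:
        return value >> 1
    return (value >> 1) ^ (~0)

# Small signed values map to small unsigned values, so they varint-encode
# compactly: 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, 2 -> 4, ...
assert [zigzag_encode(v) for v in (0, -1, 1, -2, 2)] == [0, 1, 2, 3, 4]
assert all(zigzag_decode(zigzag_encode(v)) == v for v in range(-1000, 1000))
```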
# The *ByteSize() functions below return the number of bytes required to
# serialize "field number + type" information and then serialize the value.
def Int32ByteSize(field_number, int32):
return Int64ByteSize(field_number, int32)
def Int32ByteSizeNoTag(int32):
return _VarUInt64ByteSizeNoTag(0xffffffffffffffff & int32)
def Int64ByteSize(field_number, int64):
# Have to convert to uint before calling UInt64ByteSize().
return UInt64ByteSize(field_number, 0xffffffffffffffff & int64)
def UInt32ByteSize(field_number, uint32):
return UInt64ByteSize(field_number, uint32)
def UInt64ByteSize(field_number, uint64):
return TagByteSize(field_number) + _VarUInt64ByteSizeNoTag(uint64)
def SInt32ByteSize(field_number, int32):
return UInt32ByteSize(field_number, ZigZagEncode(int32))
def SInt64ByteSize(field_number, int64):
return UInt64ByteSize(field_number, ZigZagEncode(int64))
def Fixed32ByteSize(field_number, fixed32):
return TagByteSize(field_number) + 4
def Fixed64ByteSize(field_number, fixed64):
return TagByteSize(field_number) + 8
def SFixed32ByteSize(field_number, sfixed32):
return TagByteSize(field_number) + 4
def SFixed64ByteSize(field_number, sfixed64):
return TagByteSize(field_number) + 8
def FloatByteSize(field_number, flt):
return TagByteSize(field_number) + 4
def DoubleByteSize(field_number, double):
return TagByteSize(field_number) + 8
def BoolByteSize(field_number, b):
return TagByteSize(field_number) + 1
def EnumByteSize(field_number, enum):
return UInt32ByteSize(field_number, enum)
def StringByteSize(field_number, string):
return BytesByteSize(field_number, string.encode('utf-8'))
def BytesByteSize(field_number, b):
return (TagByteSize(field_number)
+ _VarUInt64ByteSizeNoTag(len(b))
+ len(b))
def GroupByteSize(field_number, message):
return (2 * TagByteSize(field_number) # START and END group.
+ message.ByteSize())
def MessageByteSize(field_number, message):
return (TagByteSize(field_number)
+ _VarUInt64ByteSizeNoTag(message.ByteSize())
+ message.ByteSize())
def MessageSetItemByteSize(field_number, msg):
# First compute the sizes of the tags.
# There are 2 tags for the beginning and ending of the repeated group, that
# is field number 1, one with field number 2 (type_id) and one with field
# number 3 (message).
total_size = (2 * TagByteSize(1) + TagByteSize(2) + TagByteSize(3))
# Add the number of bytes for type_id.
total_size += _VarUInt64ByteSizeNoTag(field_number)
message_size = msg.ByteSize()
# The number of bytes for encoding the length of the message.
total_size += _VarUInt64ByteSizeNoTag(message_size)
# The size of the message.
total_size += message_size
return total_size
def TagByteSize(field_number):
"""Returns the bytes required to serialize a tag with this field number."""
# Just pass in type 0, since the type won't affect the tag+type size.
return _VarUInt64ByteSizeNoTag(PackTag(field_number, 0))
# Private helper function for the *ByteSize() functions above.
def _VarUInt64ByteSizeNoTag(uint64):
"""Returns the number of bytes required to serialize a single varint
using boundary value comparisons. (unrolled loop optimization -WPierce)
uint64 must be unsigned.
"""
if uint64 <= 0x7f: return 1
if uint64 <= 0x3fff: return 2
if uint64 <= 0x1fffff: return 3
if uint64 <= 0xfffffff: return 4
if uint64 <= 0x7ffffffff: return 5
if uint64 <= 0x3ffffffffff: return 6
if uint64 <= 0x1ffffffffffff: return 7
if uint64 <= 0xffffffffffffff: return 8
if uint64 <= 0x7fffffffffffffff: return 9
if uint64 > UINT64_MAX:
raise message.EncodeError('Value out of range: %d' % uint64)
return 10
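The boundary table above can be cross-checked against an actual varint encoder (7 payload bits per byte, most-significant bit as the continuation flag); the length of the encoding must agree with the unrolled comparisons:

```python
def encode_varint(n):
    # Standard base-128 varint: 7 bits per byte, MSB set on all but the last.
    out = bytearray()
    while True:
        b = n & 0x7f
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def var_uint64_byte_size(n):
    # Loop form of the boundary comparisons above.
    size = 1
    while n > 0x7f:
        n >>= 7
        size += 1
    return size

# Check the lengths at and around the boundaries, up to the 10-byte maximum.
for n in (0, 0x7f, 0x80, 0x3fff, 0x4000, 0x1fffff, (1 << 64) - 1):
    assert len(encode_varint(n)) == var_uint64_byte_size(n)
assert var_uint64_byte_size((1 << 64) - 1) == 10
```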
NON_PACKABLE_TYPES = (
descriptor.FieldDescriptor.TYPE_STRING,
descriptor.FieldDescriptor.TYPE_GROUP,
descriptor.FieldDescriptor.TYPE_MESSAGE,
descriptor.FieldDescriptor.TYPE_BYTES
)
def IsTypePackable(field_type):
"""Return true iff packable = true is valid for fields of this type.
Args:
field_type: a FieldDescriptor::Type value.
Returns:
True iff fields of this type are packable.
"""
return field_type not in NON_PACKABLE_TYPES
|
It was announced back in July that BtoBet, a top online casino and sportsbook platform developer, is set to be the Digital Sponsor of the upcoming Central and Eastern European Gaming Conference (CEEGC). Now, we have learned that BtoBet has been shortlisted for three awards at the event, set to take place September 25th at the Ritz-Carlton Budapest.
The developer has been nominated in three categories at the upcoming CEEGC: Best Sports Betting Innovation, Best Overall Sports Betting Provider, and Rising Star in Sports Betting Technology.
The Chief Marketing Officer of BtoBet, Sabrina Solda, commented that over the years the company has established itself as a leader in providing new technologies to online gaming operators, and that being shortlisted in three categories demonstrates the quality and excellence of BtoBet's online gaming platforms.
The CEEGC Awards celebrate the many facets of the gaming industry including service providers, operators and software suppliers. The nominees in each category have been shortlisted due to their outstanding contributions to the online sector based on their respective fields.
BtoBet will have to wait just a few more weeks to see if they will be a recipient of one of the prestigious CEEGC awards. The multi-national company is part of a group with two decades of experience in software development in several categories including telecommunication, IT, finance, banking and e-commerce.
The experience that BtoBet has gained as part of these environments allows the brand to be a visionary within the sports betting and online gaming industry, having a strong understanding of emerging trends as well as anticipating the needs of operators and bookmakers.
|
#api.py
#Routines for dumping the MIB API region of a mib12 executive module and verifying
#the contents to make sure they have not been stomped on by some other process.
from pymomo.hex8.decode import *
from pymomo.utilities.paths import MomoPaths
from pymomo.utilities import build
from config12 import MIB12Processor
from pymomo.utilities import intelhex
class MIBAPI:
def __init__(self, hexfile, chip):
		# IntelHex16bit takes a path directly; there is no need to open the file first.
		self.hf = intelhex.IntelHex16bit(hexfile)
proc = MIB12Processor.FromChip(chip)
self.api_base = proc.api_range[0]
self.valid = self.verify_api()
def verify_api(self):
"""
Verify that all instructions in the MIB api region are either retlw 0
or goto.
"""
		# The MIB API table is 16 instruction slots long; each slot must hold
		# either "retlw 0" or a "goto", otherwise the region has been corrupted.
		for i in xrange(0, 16):
			try:
				val = decode_retlw(self.hf, self.api_base + i)
				if val == 0:
					continue
				return False	# a retlw with a nonzero literal is invalid
			except Exception:
				pass
			try:
				decode_goto(self.hf, self.api_base + i)
				continue
			except Exception:
				pass
			return False	# neither retlw 0 nor goto
		return True
def print_api(self):
print "MIB API Block"
print "Valid:", self.valid
print "\nTable Contents Follow"
for i in xrange(0, 16):
try:
val = decode_retlw(self.hf, self.api_base + i)
print "%d: retlw 0x%x" % (i, val)
continue
			except Exception:
pass
try:
addr = decode_goto(self.hf, self.api_base + i)
print "%d: goto 0x%x" % (i, addr)
continue
			except Exception:
pass
print "%d: Invalid Instruction (0x%x)" % (i, self.hf[self.api_base + i])
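`decode_retlw` and `decode_goto` come from `pymomo.hex8.decode`, which is not shown here. As a rough illustration of what such decoders likely check, PIC16 midrange cores use 14-bit program words where `retlw k` is encoded as `11 0100 kkkk kkkk` and `goto a` as `10 1aaa aaaa aaaa` (11-bit address). The sketch below is a hypothetical stand-in, not the actual pymomo implementation:

```python
# Hypothetical 14-bit PIC midrange instruction decoders (assumed opcode
# layouts; NOT the real pymomo.hex8.decode functions):
#   retlw k : 11 0100 kkkk kkkk   (top 6 bits == 0b110100, 8-bit literal)
#   goto  a : 10 1aaa aaaa aaaa   (top 3 bits == 0b101, 11-bit address)
def decode_retlw_sketch(word):
    if (word >> 8) != 0b110100:
        raise ValueError("not a retlw")
    return word & 0xff

def decode_goto_sketch(word):
    if (word >> 11) != 0b101:
        raise ValueError("not a goto")
    return word & 0x7ff

assert decode_retlw_sketch(0x3400) == 0      # retlw 0
assert decode_goto_sketch(0x2800) == 0       # goto 0x000
assert decode_goto_sketch(0x2FFF) == 0x7ff   # goto 0x7ff
```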
|
Mortified after her semester abroad is cut short, Amelia Christiansen returns to Deep Haven, certain she isn't brave enough for the adventures she's dreamed of. The last thing she expects is for the man who broke her heart to cross the Atlantic and beg forgiveness. Heir to a European hotel dynasty, Roark St. John has trekked from one exotic locale to another, haunted by tragedy and the expectations that accompany his last name. Amelia is the first woman to give him a reason to stop running. He'll do anything for a second chance--even contend with Amelia's old flame, who is intent on sending Roark packing. While one surprise after another leaves Amelia reeling, Roark's continued presence only highlights the questions pursuing her. Like him, is she running from the life God has called her to? Could finding her new place mean leaving home behind?
|