#!/usr/bin/env python
# -*- coding: ascii -*-
"""defines.py - this file contains meta data and program parameters"""
# Meta data
__author__ = "Nicolai Spicher"
__credits__ = ["Nicolai Spicher", "Stefan Maderwald", "Markus Kukuk", "Mark E. Ladd"]
__license__ = "GPL v3"
__version__ = "v0.2-beta"
__maintainer__ = "Nicolai Spicher"
__email__ = "nicolai[dot]spicher[at]fh-dortmund[dot]de"
__status__ = "Beta"
__url__ = "https://github.com/nspi/vbcg"
__description__ = "real-time application for video-based estimation of the heart's activity"
# Indices of program settings
IDX_WEBCAM = 0
IDX_CAMERA = 1
IDX_ALGORITHM = 2
IDX_CURVES = 3
IDX_FRAMES = 4
IDX_FACE = 5
IDX_FPS = 6
IDX_COLORCHANNEL = 7
# Indices of algorithm parameters
IDX_ZERO_PADDING = 0
IDX_WIN_SIZE = 1
IDX_RUN_MAX = 2
IDX_MIN_TIME = 3
# Standard values of program settings
VAL_WEBCAM = 1
VAL_CAMERA = 1
VAL_ALGORITHM = 0
VAL_CURVES = 1
VAL_FRAMES = 0
VAL_FACE = 0
VAL_FPS = 25
VAL_COLORCHANNEL = 1
# Standard values of algorithm parameters
VAL_ZERO_PADDING = 1
VAL_WIN_SIZE = 9
VAL_RUN_MAX = 3
VAL_MIN_TIME = 0.5
# Labels of algorithms in GUI
LABEL_ALGORITHM_1 = "Estimate HR (BMT 2015)"
LABEL_ALGORITHM_2 = "Filter signal (ISMRM 2016)"
LABEL_ALGORITHM_3 = "Trigger MRI (ISMRM 2015)"
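The IDX_* and VAL_* constants above pair positional indices with default values. A minimal sketch of how a settings list built from these defaults might be read (the flat-list layout is an assumption; the actual lookup lives elsewhere in the vbcg application):

```python
# Hypothetical sketch: the IDX_* constants name positions in one flat list of
# settings, and the VAL_* constants supply the defaults, in index order.
IDX_WEBCAM, IDX_CAMERA, IDX_ALGORITHM, IDX_CURVES = 0, 1, 2, 3
IDX_FRAMES, IDX_FACE, IDX_FPS, IDX_COLORCHANNEL = 4, 5, 6, 7

default_settings = [1, 1, 0, 1, 0, 0, 25, 1]  # VAL_* defaults, in index order

# Reading a setting is then a plain list lookup.
fps = default_settings[IDX_FPS]
```

Keeping the indices and defaults in the same module makes it easy to verify that the two stay aligned.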
|
Click the Barbie Doll link above to see all the Mod Barbie Dolls in the vintage line!
The term “mod” is short for “modern,” which is exactly what these girls were back in 1967 when the first mod Barbie was introduced. She was named the New Barbie Twist ’n Turn doll. To present the new line, Mattel ran a “trade-in” promotion: trade in an old doll along with $1.50 and receive a brand new Twist ’n Turn Barbie. The old dolls were given to charity. The Twist ’n Turn doll had long straight hair, new make-up, a new head mold, and a brand new body! These dolls and their fashions were inspired by the youth of America as well as the styles of Great Britain. As we all know, some of the best music came from there, and within the music world, fashion always follows! Barbie now looked younger and more hip, and pop culture became a huge part of her world. Between 1967 and 1972, many Barbies were released.
|
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class ElasticPoolPerDatabaseMinDtuCapability(Model):
    """The minimum per-database DTU capability.

    Variables are only populated by the server, and will be ignored when
    sending a request.

    :ivar limit: The maximum DTUs per database.
    :vartype limit: long
    :ivar status: The status of the capability. Possible values include:
     'Visible', 'Available', 'Default', 'Disabled'
    :vartype status: str or :class:`CapabilityStatus
     <azure.mgmt.sql.models.CapabilityStatus>`
    """

    _validation = {
        'limit': {'readonly': True},
        'status': {'readonly': True},
    }

    _attribute_map = {
        'limit': {'key': 'limit', 'type': 'long'},
        'status': {'key': 'status', 'type': 'CapabilityStatus'},
    }

    def __init__(self):
        self.limit = None
        self.status = None
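Because `limit` and `status` are read-only, client code only ever reads them after a response is deserialized. A minimal stand-in sketch of that mapping (the real SDK does this through msrest's `_attribute_map`; the `_deserialize` helper and payload here are hypothetical):

```python
# Stand-in mirroring the generated class: attributes default to None and are
# filled from a (hypothetical) service response dict.
class ElasticPoolPerDatabaseMinDtuCapability(object):
    def __init__(self):
        self.limit = None
        self.status = None

def _deserialize(payload):
    # In the real SDK, msrest performs this mapping using _attribute_map.
    capability = ElasticPoolPerDatabaseMinDtuCapability()
    capability.limit = payload.get('limit')
    capability.status = payload.get('status')
    return capability

cap = _deserialize({'limit': 0, 'status': 'Default'})
```

The read-only markers in `_validation` mean these fields are silently dropped if set on an outgoing request.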
|
Ways A Computer Can Crash error codes tend to be caused, in one way or another, by misconfigured system files in the Windows OS.
If you encounter a Ways A Computer Can Crash error code, we strongly recommend that you run an error-message scan.
This article provides information that shows you how to resolve Windows Ways A Computer Can Crash error messages, both manually and automatically. It can also help you troubleshoot the most typical error messages connected with a Ways A Computer Can Crash error you may receive.
1. What is a Ways A Computer Can Crash error message?
2. What causes a Ways A Computer Can Crash error message?
3. How to simply solve a Ways A Computer Can Crash error code.
What is a Ways A Computer Can Crash error message?
A Ways A Computer Can Crash error is the number-and-letter format of the error message raised. It is the typical error format used by Windows and by other Windows-compatible software and device-driver manufacturers.
This code is used by the vendor to diagnose the error. A Ways A Computer Can Crash error code carries a numeric value as well as a technical explanation. In some circumstances the error code may have additional parameters in Ways A Computer Can Crash format; the additional hexadecimal code gives the addresses of the memory locations where the instructions were loaded at the time of the error.
What causes a Ways A Computer Can Crash error code?
A Ways A Computer Can Crash error code is the result of Microsoft Windows system corruption. Damaged system files are often a substantial danger to the performance of your PC.
Many events can trigger file errors: an incomplete installation, an unfinished file deletion, or improper removal of software or devices. It could also be caused by a virus or spyware infection, or by an improper shutdown of the computer. Any of these can result in the removal or corruption of Windows system files, leaving missing and incorrectly linked files and records essential to the correct working of the program.
How to simply repair a Ways A Computer Can Crash error message?
1) Download the (Ways A Computer Can Crash) fix utility.
3) From the next window, click “Restore my machine to a previous date” and then click Next.
6) Restart the computer once the restore is finished.
Here is a link to an alternative Ways A Computer Can Crash fix tool you may try if the previous program doesn’t work.
|
#!/usr/bin/env python
# Copyright 2018 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This example illustrates how to add a campaign.
To get campaigns, run get_campaigns.py.
"""
import argparse
import datetime
import sys
import uuid
from google.ads.googleads.client import GoogleAdsClient
from google.ads.googleads.errors import GoogleAdsException
_DATE_FORMAT = "%Y%m%d"
def main(client, customer_id):
    campaign_budget_service = client.get_service("CampaignBudgetService")
    campaign_service = client.get_service("CampaignService")

    # [START add_campaigns]
    # Create a budget, which can be shared by multiple campaigns.
    campaign_budget_operation = client.get_type("CampaignBudgetOperation")
    campaign_budget = campaign_budget_operation.create
    campaign_budget.name = f"Interplanetary Budget {uuid.uuid4()}"
    campaign_budget.delivery_method = (
        client.enums.BudgetDeliveryMethodEnum.STANDARD
    )
    campaign_budget.amount_micros = 500000

    # Add budget.
    try:
        campaign_budget_response = (
            campaign_budget_service.mutate_campaign_budgets(
                customer_id=customer_id, operations=[campaign_budget_operation]
            )
        )
    except GoogleAdsException as ex:
        _handle_googleads_exception(ex)
    # [END add_campaigns]

    # [START add_campaigns_1]
    # Create campaign.
    campaign_operation = client.get_type("CampaignOperation")
    campaign = campaign_operation.create
    campaign.name = f"Interplanetary Cruise {uuid.uuid4()}"
    campaign.advertising_channel_type = (
        client.enums.AdvertisingChannelTypeEnum.SEARCH
    )

    # Recommendation: Set the campaign to PAUSED when creating it to prevent
    # the ads from immediately serving. Set to ENABLED once you've added
    # targeting and the ads are ready to serve.
    campaign.status = client.enums.CampaignStatusEnum.PAUSED

    # Set the bidding strategy and budget.
    campaign.manual_cpc.enhanced_cpc_enabled = True
    campaign.campaign_budget = campaign_budget_response.results[0].resource_name

    # Set the campaign network options.
    campaign.network_settings.target_google_search = True
    campaign.network_settings.target_search_network = True
    campaign.network_settings.target_content_network = False
    campaign.network_settings.target_partner_search_network = False
    # [END add_campaigns_1]

    # Optional: Set the start date.
    start_time = datetime.date.today() + datetime.timedelta(days=1)
    campaign.start_date = datetime.date.strftime(start_time, _DATE_FORMAT)

    # Optional: Set the end date.
    end_time = start_time + datetime.timedelta(weeks=4)
    campaign.end_date = datetime.date.strftime(end_time, _DATE_FORMAT)

    # Add the campaign.
    try:
        campaign_response = campaign_service.mutate_campaigns(
            customer_id=customer_id, operations=[campaign_operation]
        )
        print(f"Created campaign {campaign_response.results[0].resource_name}.")
    except GoogleAdsException as ex:
        _handle_googleads_exception(ex)


def _handle_googleads_exception(exception):
    print(
        f'Request with ID "{exception.request_id}" failed with status '
        f'"{exception.error.code().name}" and includes the following errors:'
    )
    for error in exception.failure.errors:
        print(f'\tError with message "{error.message}".')
        if error.location:
            for field_path_element in error.location.field_path_elements:
                print(f"\t\tOn field: {field_path_element.field_name}")
    sys.exit(1)


if __name__ == "__main__":
    # GoogleAdsClient will read the google-ads.yaml configuration file in the
    # home directory if none is specified.
    googleads_client = GoogleAdsClient.load_from_storage(version="v8")

    parser = argparse.ArgumentParser(
        description="Adds a campaign for specified customer."
    )
    # The following argument(s) should be provided to run the example.
    parser.add_argument(
        "-c",
        "--customer_id",
        type=str,
        required=True,
        help="The Google Ads customer ID.",
    )
    args = parser.parse_args()

    main(googleads_client, args.customer_id)
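The optional start and end dates above are plain `YYYYMMDD` strings produced with `_DATE_FORMAT`. A small, reproducible sketch of that computation, using a fixed date instead of `date.today()` so the output is deterministic:

```python
import datetime

_DATE_FORMAT = "%Y%m%d"

# Fixed reference date (hypothetical) instead of date.today(), so the
# resulting strings are reproducible.
start_time = datetime.date(2021, 7, 1) + datetime.timedelta(days=1)
end_time = start_time + datetime.timedelta(weeks=4)

start_date = start_time.strftime(_DATE_FORMAT)  # campaign.start_date
end_date = end_time.strftime(_DATE_FORMAT)      # campaign.end_date
```

The API expects these exact eight-digit strings, not date objects, which is why the example formats them before assignment.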
|
The markets don’t believe an additional two interest rate hikes will happen this year, according to CNBC. Subway said it shuttered 359 locations in 2016, Reuters reports. These are among today’s must reads from around the commercial real estate industry.
|
import os
from distutils.core import setup
README = open(os.path.join(os.path.dirname(__file__), 'README.rst')).read()
# allow setup.py to be run from any path
os.chdir(os.path.normpath(os.path.join(os.path.abspath(__file__), os.pardir)))
setup(
    name='santaclara_third',
    version='0.11.1',
    packages=['santaclara_third'],
    package_data={'santaclara_third': [
        'static/css/*.css',
        'static/css/images/*',
        'static/fonts/*',
        'static/js/ace/snippets/*.js',
        'static/js/ace/*.js',
        'static/js/*.js',
    ]},
    include_package_data=True,
    license='GNU General Public License v3 or later (GPLv3+)',  # example license
    description='A Django app for third-party software',
    long_description=README,
    url='http://www.gianoziaorientale.org/software/',
    author='Gianozia Orientale',
    author_email='chiara@gianziaorientale.org',
    classifiers=[
        'Environment :: Web Environment',
        'Framework :: Django',
        'Intended Audience :: Developers',
        'License :: OSI Approved :: GNU General Public License v3 or later (GPLv3+)',  # example license
        'Operating System :: OS Independent',
        'Programming Language :: Python',
        'Programming Language :: Python :: 2.6',
        'Programming Language :: Python :: 2.7',
        'Topic :: Internet :: WWW/HTTP',
        'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
    ],
)
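The `package_data` entries above are shell-style glob patterns matched against paths inside the package. A rough illustration with `fnmatch` (the file names are hypothetical; note that setuptools actually resolves these patterns with `glob`, where `*` does not cross directory separators):

```python
from fnmatch import fnmatch

# Two of the patterns from package_data above; file names are hypothetical.
patterns = ['static/css/*.css', 'static/js/*.js']
files = ['static/css/main.css', 'static/css/images/bg.png', 'static/js/app.js']

# Keep the files that match at least one pattern.
matched = [f for f in files if any(fnmatch(f, p) for p in patterns)]
```

Directory contents such as `static/css/images/*` need their own pattern because `*.css` only matches files with that extension.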
|
Tuesday, 25 December 2018 to Saturday, 5 January 2019.
Our current exhibition “Habu and Mongoose” ends on Monday 24 December, and our next exhibition “Craft Next” starts on Sunday 6 January. We wish you all happy holidays, and look forward to welcoming you at the upcoming exhibitions.
|
"""
Validate changes to an XBlock before it is updated.
"""
from collections import Counter
from submissions.api import MAX_TOP_SUBMISSIONS
from openassessment.assessment.serializers import rubric_from_dict, InvalidRubric
from openassessment.assessment.api.student_training import validate_training_examples
from openassessment.xblock.resolve_dates import resolve_dates, DateValidationError, InvalidDateFormat
from openassessment.xblock.data_conversion import convert_training_examples_list_to_dict
def _match_by_order(items, others):
    """
    Given two lists of dictionaries, each containing "order_num" keys,
    return a set of tuples, where the items in the tuple are dictionaries
    with the same "order_num" keys.

    Args:
        items (list of dict): Items to match, each of which must contain an "order_num" key.
        others (list of dict): Items to match, each of which must contain an "order_num" key.

    Returns:
        list of tuples, each containing two dictionaries

    Raises:
        IndexError: A dictionary does not contain an 'order_num' key.
    """
    # Sort each list by its "order_num" key, then zip them and return
    key_func = lambda x: x['order_num']
    return zip(sorted(items, key=key_func), sorted(others, key=key_func))

def _duplicates(items):
    """
    Given an iterable of items, return a set of duplicate items in the list.

    Args:
        items (list): The list of items, which may contain duplicates.

    Returns:
        set: The set of duplicate items in the list.
    """
    counts = Counter(items)
    return set(x for x in items if counts[x] > 1)

def _is_valid_assessment_sequence(assessments):
    """
    Check whether the sequence of assessments is valid. The rules enforced are:
        - must have one of staff-, peer-, self-, or example-based-assessment listed
        - in addition to those, only student-training is a valid entry
        - no duplicate entries
        - if staff-assessment is present, it must come last
        - if example-based-assessment is present, it must come first
        - if student-training is present, it must be followed at some point by peer-assessment

    Args:
        assessments (list of dict): List of assessment dictionaries.

    Returns:
        bool
    """
    sequence = [asmnt.get('name') for asmnt in assessments]
    required = ['example-based-assessment', 'staff-assessment', 'peer-assessment', 'self-assessment']
    optional = ['student-training']

    # At least one of the required assessments?
    if not any(name in required for name in sequence):
        return False

    # Nothing except what appears in required or optional
    if any(name not in required + optional for name in sequence):
        return False

    # No duplicates
    if any(sequence.count(name) > 1 for name in sequence):
        return False

    # If using staff-assessment, it must come last
    if 'staff-assessment' in sequence and 'staff-assessment' != sequence[-1]:
        return False

    # If using example-based-assessment, it must come first
    if 'example-based-assessment' in sequence and 'example-based-assessment' != sequence[0]:
        return False

    # If using student-training, it must be followed by peer-assessment at some point
    if 'student-training' in sequence:
        train_index = sequence.index('student-training')
        if 'peer-assessment' not in sequence[train_index:]:
            return False

    return True

def validate_assessments(assessments, current_assessments, is_released, _):
    """
    Check that the assessment dict is semantically valid. See _is_valid_assessment_sequence()
    above for a description of valid assessment sequences. In addition, enforces validation
    of several assessment-specific settings.

    If a question has been released, the type and number of assessment steps
    cannot be changed.

    Args:
        assessments (list of dict): list of serialized assessment models.
        current_assessments (list of dict): list of the current serialized
            assessment models. Used to determine if the assessment configuration
            has changed since the question had been released.
        is_released (boolean): True if the question has been released.
        _ (function): The service function used to get the appropriate i18n text

    Returns:
        tuple (is_valid, msg) where
            is_valid is a boolean indicating whether the assessment is semantically valid
            and msg describes any validation errors found.
    """
    if len(assessments) == 0:
        return False, _("This problem must include at least one assessment.")

    # Ensure that we support this sequence of assessments.
    if not _is_valid_assessment_sequence(assessments):
        msg = _("The assessment order you selected is invalid.")
        return False, msg

    for assessment_dict in assessments:
        # The number you need to grade must be >= the number of people that need to grade you
        if assessment_dict.get('name') == 'peer-assessment':
            must_grade = assessment_dict.get('must_grade')
            must_be_graded_by = assessment_dict.get('must_be_graded_by')

            if must_grade is None or must_grade < 1:
                return False, _('In peer assessment, the "Must Grade" value must be a positive integer.')

            if must_be_graded_by is None or must_be_graded_by < 1:
                return False, _('In peer assessment, the "Graded By" value must be a positive integer.')

            if must_grade < must_be_graded_by:
                return False, _(
                    'In peer assessment, the "Must Grade" value must be greater than or equal to the "Graded By" value.'
                )

        # Student training must have at least one example, and all
        # examples must have unique answers.
        if assessment_dict.get('name') == 'student-training':
            answers = []
            examples = assessment_dict.get('examples')

            if not examples:
                return False, _('You must provide at least one example response for learner training.')

            for example in examples:
                if example.get('answer') in answers:
                    return False, _('Each example response for learner training must be unique.')
                answers.append(example.get('answer'))

        # Example-based assessment MUST specify 'ease' or 'fake' as the algorithm ID,
        # at least for now. Later, we may make this more flexible.
        if assessment_dict.get('name') == 'example-based-assessment':
            if assessment_dict.get('algorithm_id') not in ['ease', 'fake']:
                return False, _('The "algorithm_id" value must be set to "ease" or "fake"')

        # Staff grading must be required if it is the only step
        if assessment_dict.get('name') == 'staff-assessment' and len(assessments) == 1:
            required = assessment_dict.get('required')
            if not required:  # Captures both None and explicit False cases; both are invalid
                return False, _('The "required" value must be true if staff assessment is the only step.')

    if is_released:
        if len(assessments) != len(current_assessments):
            return False, _("The number of assessments cannot be changed after the problem has been released.")

        names = [assessment.get('name') for assessment in assessments]
        current_names = [assessment.get('name') for assessment in current_assessments]
        if names != current_names:
            return False, _("The assessment type cannot be changed after the problem has been released.")

    return True, u''

def validate_rubric(rubric_dict, current_rubric, is_released, is_example_based, _):
    """
    Check that the rubric is semantically valid.

    Args:
        rubric_dict (dict): Serialized Rubric model representing the updated state of the rubric.
        current_rubric (dict): Serialized Rubric model representing the current state of the rubric.
        is_released (bool): True if and only if the problem has been released.
        is_example_based (bool): True if and only if this is an example-based assessment.
        _ (function): The service function used to get the appropriate i18n text

    Returns:
        tuple (is_valid, msg) where
            is_valid is a boolean indicating whether the assessment is semantically valid
            and msg describes any validation errors found.
    """
    try:
        rubric_from_dict(rubric_dict)
    except InvalidRubric:
        return False, _(u'This rubric definition is not valid.')

    for criterion in rubric_dict['criteria']:
        # No duplicate option names within a criterion
        duplicates = _duplicates([option['name'] for option in criterion['options']])
        if len(duplicates) > 0:
            msg = _(u"Options in '{criterion}' have duplicate name(s): {duplicates}").format(
                criterion=criterion['name'], duplicates=", ".join(duplicates)
            )
            return False, msg

        # Some criteria may have no options, just written feedback.
        # In this case, written feedback must be required (not optional or disabled).
        if len(criterion['options']) == 0 and criterion.get('feedback', 'disabled') != 'required':
            msg = _(u'Criteria with no options must require written feedback.')
            return False, msg

        # Example-based assessments impose the additional restriction
        # that the point values for options must be unique within
        # a particular rubric criterion.
        if is_example_based:
            duplicates = _duplicates([option['points'] for option in criterion['options']])
            if len(duplicates) > 0:
                msg = _(u"Example-based assessments cannot have duplicate point values.")
                return False, msg

    # After a problem is released, authors are allowed to change text,
    # but nothing that would change the point value of a rubric.
    if is_released:
        # Number of prompts must be the same
        if len(rubric_dict['prompts']) != len(current_rubric['prompts']):
            return False, _(u'Prompts cannot be created or deleted after a problem is released.')

        # Number of criteria must be the same
        if len(rubric_dict['criteria']) != len(current_rubric['criteria']):
            return False, _(u'The number of criteria cannot be changed after a problem is released.')

        # Criteria names must be the same
        # We use criteria names as unique identifiers (unfortunately)
        # throughout the system. Changing them mid-flight can cause
        # the grade page, for example, to raise 500 errors.
        # When we implement non-XML authoring, we might be able to fix this
        # the right way by assigning unique identifiers for criteria;
        # but for now, this is the safest way to avoid breaking problems
        # post-release.
        current_criterion_names = set(criterion.get('name') for criterion in current_rubric['criteria'])
        new_criterion_names = set(criterion.get('name') for criterion in rubric_dict['criteria'])
        if current_criterion_names != new_criterion_names:
            return False, _(u'Criteria names cannot be changed after a problem is released')

        # Number of options for each criterion must be the same
        for new_criterion, old_criterion in _match_by_order(rubric_dict['criteria'], current_rubric['criteria']):
            if len(new_criterion['options']) != len(old_criterion['options']):
                return False, _(u'The number of options cannot be changed after a problem is released.')
            else:
                for new_option, old_option in _match_by_order(new_criterion['options'], old_criterion['options']):
                    if new_option['points'] != old_option['points']:
                        return False, _(u'Point values cannot be changed after a problem is released.')

    return True, u''

def validate_dates(start, end, date_ranges, _):
    """
    Check that start and due dates are valid.

    Args:
        start (str): ISO-formatted date string indicating when the problem opens.
        end (str): ISO-formatted date string indicating when the problem closes.
        date_ranges (list of tuples): List of (start, end) pairs for each submission / assessment.
        _ (function): The service function used to get the appropriate i18n text

    Returns:
        tuple (is_valid, msg) where
            is_valid is a boolean indicating whether the assessment is semantically valid
            and msg describes any validation errors found.
    """
    try:
        resolve_dates(start, end, date_ranges, _)
    except (DateValidationError, InvalidDateFormat) as ex:
        return False, unicode(ex)
    else:
        return True, u''

def validate_assessment_examples(rubric_dict, assessments, _):
    """
    Validate assessment training examples.

    Args:
        rubric_dict (dict): The serialized rubric model.
        assessments (list of dict): List of assessment dictionaries.
        _ (function): The service function used to get the appropriate i18n text

    Returns:
        tuple (is_valid, msg) where
            is_valid is a boolean indicating whether the assessment is semantically valid
            and msg describes any validation errors found.
    """
    for asmnt in assessments:
        if asmnt['name'] == 'student-training' or asmnt['name'] == 'example-based-assessment':
            examples = convert_training_examples_list_to_dict(asmnt['examples'])

            # Must have at least one training example
            if len(examples) == 0:
                return False, _(
                    u"Learner training and example-based assessments must have at least one training example."
                )

            # Delegate to the student training API to validate the
            # examples against the rubric.
            errors = validate_training_examples(rubric_dict, examples)
            if errors:
                return False, "; ".join(errors)

    return True, u''

def validator(oa_block, _, strict_post_release=True):
    """
    Return a validator function configured for the XBlock.
    This will validate assessments, rubrics, and dates.

    Args:
        oa_block (OpenAssessmentBlock): The XBlock being updated.
        _ (function): The service function used to get the appropriate i18n text

    Keyword Arguments:
        strict_post_release (bool): If true, restrict what authors can update once
            a problem has been released.

    Returns:
        callable, of a form that can be passed to `update_from_xml`.
    """
    def _inner(rubric_dict, assessments, leaderboard_show=0, submission_start=None, submission_due=None):
        is_released = strict_post_release and oa_block.is_released()

        # Assessments
        current_assessments = oa_block.rubric_assessments
        success, msg = validate_assessments(assessments, current_assessments, is_released, _)
        if not success:
            return False, msg

        # Rubric
        is_example_based = 'example-based-assessment' in [asmnt.get('name') for asmnt in assessments]
        current_rubric = {
            'prompts': oa_block.prompts,
            'criteria': oa_block.rubric_criteria
        }
        success, msg = validate_rubric(rubric_dict, current_rubric, is_released, is_example_based, _)
        if not success:
            return False, msg

        # Training examples
        success, msg = validate_assessment_examples(rubric_dict, assessments, _)
        if not success:
            return False, msg

        # Dates
        submission_dates = [(submission_start, submission_due)]
        assessment_dates = [(asmnt.get('start'), asmnt.get('due')) for asmnt in assessments]
        success, msg = validate_dates(oa_block.start, oa_block.due, submission_dates + assessment_dates, _)
        if not success:
            return False, msg

        # Leaderboard
        if leaderboard_show < 0 or leaderboard_show > MAX_TOP_SUBMISSIONS:
            return False, _("Leaderboard number is invalid.")

        # Success!
        return True, u''

    return _inner

def validate_submission(submission, prompts, _, text_response='required'):
    """
    Validate submission dict.

    Args:
        submission (list of unicode): Responses for the prompts.
        prompts (list of dict): The prompts from the problem definition.
        _ (function): The service function used to get the appropriate i18n text.

    Returns:
        tuple (is_valid, msg) where
            is_valid is a boolean indicating whether the submission is semantically valid
            and msg describes any validation errors found.
    """
    message = _(u"The submission format is invalid.")

    if type(submission) != list:
        return False, message

    if text_response == 'required' and len(submission) != len(prompts):
        return False, message

    for submission_part in submission:
        if type(submission_part) != unicode:
            return False, message

    return True, u''
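The ordering rules documented in `_is_valid_assessment_sequence` can be exercised on their own; the sketch below reimplements the same checks as a standalone function (not imported from the XBlock) so individual sequences are easy to test:

```python
REQUIRED = ['example-based-assessment', 'staff-assessment',
            'peer-assessment', 'self-assessment']
OPTIONAL = ['student-training']

def is_valid_sequence(sequence):
    # At least one required step, no unknown steps, no duplicates.
    if not any(name in REQUIRED for name in sequence):
        return False
    if any(name not in REQUIRED + OPTIONAL for name in sequence):
        return False
    if any(sequence.count(name) > 1 for name in sequence):
        return False
    # staff-assessment must come last; example-based-assessment must come first.
    if 'staff-assessment' in sequence and sequence[-1] != 'staff-assessment':
        return False
    if 'example-based-assessment' in sequence and sequence[0] != 'example-based-assessment':
        return False
    # student-training must be followed at some point by peer-assessment.
    if 'student-training' in sequence:
        idx = sequence.index('student-training')
        if 'peer-assessment' not in sequence[idx:]:
            return False
    return True
```

For example, `['student-training', 'peer-assessment', 'self-assessment']` passes every rule, while a sequence with `staff-assessment` anywhere but last fails.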
|
Hearts with bows pendant, silverware silverplate, antique silver, US made, heart, charm, pendant, nickel free. Measures about 29 x 27mm. Made from the old vintage tooling, silverware silverplate plating over brass. Sold by the piece.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# vim: tabstop=4 shiftwidth=4 softtabstop=4
#
# LICENSE
#
# Copyright (c) 2010-2013, GEM Foundation, G. Weatherill, M. Pagani,
# D. Monelli.
#
# The Hazard Modeller's Toolkit is free software: you can redistribute
# it and/or modify it under the terms of the GNU Affero General Public
# License as published by the Free Software Foundation, either version
# 3 of the License, or (at your option) any later version.
#
# You should have received a copy of the GNU Affero General Public License
# along with OpenQuake. If not, see <http://www.gnu.org/licenses/>
#
# DISCLAIMER
#
# The software Hazard Modeller's Toolkit (hmtk) provided herein
# is released as a prototype implementation on behalf of
# scientists and engineers working within the GEM Foundation (Global
# Earthquake Model).
#
# It is distributed for the purpose of open collaboration and in the
# hope that it will be useful to the scientific, engineering, disaster
# risk and software design communities.
#
# The software is NOT distributed as part of GEM's OpenQuake suite
# (http://www.globalquakemodel.org/openquake) and must be considered as a
# separate entity. The software provided herein is designed and implemented
# by scientific staff. It is not developed to the design standards, nor
# subject to same level of critical review by professional software
# developers, as GEM's OpenQuake software suite.
#
# Feedback and contribution to the software is welcome, and can be
# directed to the hazard scientific staff of the GEM Model Facility
# (hazard@globalquakemodel.org).
#
# The Hazard Modeller's Toolkit (hmtk) is therefore distributed WITHOUT
# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
# FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
# for more details.
#
# The GEM Foundation, and the authors of the software, assume no
# liability for use of the software.
'''
Defines the :class hmtk.sources.mtk_area_source.mtkAreaSource which represents
the hmtk definition of an area source. This extends the :class:
nrml.models.AreaSource
'''
import warnings
import numpy as np
from openquake.nrmllib import models
from openquake.hazardlib.geo.point import Point
from openquake.hazardlib.geo.polygon import Polygon
from openquake.hazardlib.source.area import AreaSource
import hmtk.sources.source_conversion_utils as conv
class mtkAreaSource(object):
    '''
    Describes the Area Source

    :param str identifier:
        ID code for the source
    :param str name:
        Source name
    :param str trt:
        Tectonic region type
    :param geometry:
        Instance of :class: nhlib.geo.polygon.Polygon class
    :param float upper_depth:
        Upper seismogenic depth (km)
    :param float lower_depth:
        Lower seismogenic depth (km)
    :param str mag_scale_rel:
        Magnitude scaling relationship
    :param float rupt_aspect_ratio:
        Rupture aspect ratio
    :param mfd:
        Magnitude frequency distribution as instance of
        :class: nrml.models.IncrementalMFD or
        :class: nrml.models.TGRMFD
    :param list nodal_plane_dist:
        List of :class: nrml.models.NodalPlane objects representing
        nodal plane distribution
    :param list hypo_depth_dist:
        List of :class: nrml.models.HypocentralDepth instances describing
        the hypocentral depth distribution
    :param catalogue:
        Earthquake catalogue associated to source as instance of
        hmtk.seismicity.catalogue.Catalogue object
    '''

    def __init__(self, identifier, name, trt=None, geometry=None,
                 upper_depth=None, lower_depth=None, mag_scale_rel=None,
                 rupt_aspect_ratio=None, mfd=None, nodal_plane_dist=None,
                 hypo_depth_dist=None):
        '''
        Instantiates class with two essential attributes: identifier and name
        '''
        self.typology = 'Area'
        self.id = identifier
        self.name = name
        self.trt = trt
        self.geometry = geometry
        self.upper_depth = upper_depth
        self.lower_depth = lower_depth
        self.mag_scale_rel = mag_scale_rel
        self.rupt_aspect_ratio = rupt_aspect_ratio
        self.mfd = mfd
        self.nodal_plane_dist = nodal_plane_dist
        self.hypo_depth_dist = hypo_depth_dist
        # Check consistency of seismogenic depth inputs
        self._check_seismogenic_depths(upper_depth, lower_depth)
        self.catalogue = None
    def create_geometry(self, input_geometry, upper_depth, lower_depth):
        '''
        If geometry is defined as a numpy array then create an instance of the
        nhlib.geo.polygon.Polygon class; if it is already an instance of that
        class, accept it as-is.

        :param input_geometry:
            Input geometry (polygon) as either
            i) instance of nhlib.geo.polygon.Polygon class
            ii) numpy.ndarray [Longitude, Latitude]
        :param float upper_depth:
            Upper seismogenic depth (km)
        :param float lower_depth:
            Lower seismogenic depth (km)
        '''
        self._check_seismogenic_depths(upper_depth, lower_depth)

        # Check/create the geometry class
        if not isinstance(input_geometry, Polygon):
            if not isinstance(input_geometry, np.ndarray):
                raise ValueError('Unrecognised or unsupported geometry '
                                 'definition')
            if np.shape(input_geometry)[0] < 3:
                raise ValueError('Incorrectly formatted polygon geometry -'
                                 ' needs three or more vertices')
            geometry = []
            for row in input_geometry:
                geometry.append(Point(row[0], row[1], self.upper_depth))
            self.geometry = Polygon(geometry)
        else:
            self.geometry = input_geometry
def _check_seismogenic_depths(self, upper_depth, lower_depth):
'''
Checks the seismic depths for physical consistency
:param float upper_depth:
Upper seismogenic depth (km)
:param float lower_depth:
            Lower seismogenic depth (km)
'''
# Simple check on depths
if upper_depth:
if upper_depth < 0.:
raise ValueError('Upper seismogenic depth must be greater than'
' or equal to 0.0!')
else:
self.upper_depth = upper_depth
else:
self.upper_depth = 0.0
if lower_depth:
if lower_depth < self.upper_depth:
raise ValueError('Lower seismogenic depth must take a greater'
' value than upper seismogenic depth')
else:
self.lower_depth = lower_depth
else:
self.lower_depth = np.inf
def select_catalogue(self, selector, distance=None):
'''
Selects the catalogue of earthquakes attributable to the source
:param selector:
Populated instance of hmtk.seismicity.selector.CatalogueSelector
class
:param float distance:
Distance (in km) to extend or contract (if negative) the zone for
selecting events
'''
if selector.catalogue.get_number_events() < 1:
raise ValueError('No events found in catalogue!')
self.catalogue = selector.within_polygon(self.geometry,
distance,
upper_depth=self.upper_depth,
lower_depth=self.lower_depth)
if self.catalogue.get_number_events() < 5:
# Throw a warning regarding the small number of earthquakes in
# the source!
warnings.warn('Source %s (%s) has fewer than 5 events'
% (self.id, self.name))
def create_oqnrml_source(self, use_defaults=False):
'''
Converts the source model into an instance of the :class:
openquake.nrmllib.models.AreaSource
:param bool use_defaults:
            If set to True, will put in default values for the magnitude
            scaling relation, rupture aspect ratio, nodal plane distribution
            or hypocentral depth distribution where missing. If set to False
            then value errors will be raised when information is missing.
'''
area_geometry = models.AreaGeometry(self.geometry.wkt,
self.upper_depth,
self.lower_depth)
return models.AreaSource(
self.id,
self.name,
self.trt,
area_geometry,
conv.render_mag_scale_rel(self.mag_scale_rel, use_defaults),
conv.render_aspect_ratio(self.rupt_aspect_ratio, use_defaults),
conv.render_mfd(self.mfd),
conv.render_npd(self.nodal_plane_dist, use_defaults),
conv.render_hdd(self.hypo_depth_dist, use_defaults))
def create_oqhazardlib_source(self, tom, mesh_spacing, area_discretisation,
use_defaults=False):
"""
Converts the source model into an instance of the :class:
openquake.hazardlib.source.area.AreaSource
:param tom:
Temporal Occurrence model as instance of :class:
openquake.hazardlib.tom.TOM
        :param float mesh_spacing:
            Mesh spacing (km)
        :param float area_discretisation:
            Spacing (km) used to discretise the area source
        :param bool use_defaults:
            If True, substitute default values for missing attributes
        """
return AreaSource(
self.id,
self.name,
self.trt,
conv.mfd_to_hazardlib(self.mfd),
mesh_spacing,
conv.mag_scale_rel_to_hazardlib(self.mag_scale_rel, use_defaults),
conv.render_aspect_ratio(self.rupt_aspect_ratio, use_defaults),
tom,
self.upper_depth,
self.lower_depth,
conv.npd_to_pmf(self.nodal_plane_dist, use_defaults),
conv.hdd_to_pmf(self.hypo_depth_dist, use_defaults),
self.geometry,
area_discretisation)
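As a standalone sketch (independent of hmtk and its imports), the depth-consistency rules enforced by `_check_seismogenic_depths` above can be expressed as:

```python
import math

def check_seismogenic_depths(upper_depth=None, lower_depth=None):
    # Mirror of the rules above: a missing upper depth defaults to 0.0 km,
    # a missing lower depth to infinity, and the pair must satisfy
    # 0.0 <= upper <= lower.
    upper = upper_depth if upper_depth is not None else 0.0
    if upper < 0.0:
        raise ValueError('Upper seismogenic depth must be greater than '
                         'or equal to 0.0!')
    lower = lower_depth if lower_depth is not None else math.inf
    if lower < upper:
        raise ValueError('Lower seismogenic depth must take a greater '
                         'value than upper seismogenic depth')
    return upper, lower
```

Note that this sketch tests `is not None` rather than truthiness; the truthiness test in the original would treat an explicit depth of 0.0 the same as a missing value.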
|
Did you ever receive a call or text message from number (405) 530-6028? If yes, please leave a comment below to help us determine who is calling from this particular phone number.
Comment: Did you ever receive an unexpected phone call, text message, or fax from number 4055306028? Please do leave your personal comment about this number below.
|
'''
You are a professional robber planning to rob houses along a street. Each house has a certain amount of money stashed. All houses at this place are arranged in a circle. That means the first house is the neighbor of the last one. Meanwhile, adjacent houses have security system connected and it will automatically contact the police if two adjacent houses were broken into on the same night.
Given a list of non-negative integers representing the amount of money of each house, determine the maximum amount of money you can rob tonight without alerting the police.
Example 1:
Input: [2,3,2]
Output: 3
Explanation: You cannot rob house 1 (money = 2) and then rob house 3 (money = 2),
because they are adjacent houses.
Example 2:
Input: [1,2,3,1]
Output: 4
Explanation: Rob house 1 (money = 1) and then rob house 3 (money = 3).
Total amount you can rob = 1 + 3 = 4.
'''
# We build on the solution to the original house robber case:
def rob(houses):
'''
This function outputs the maximum amount of money we can get from robbing a list
of houses whose values are the money. To do this, we will find the max amount of money
to rob sublists houses[0:1], houses[0:2], ..., houses[0:k+1],..., houses[0:len(houses)]
Let f(k) be the maximum money for robbing houses[0:k+1]. In other words,
    f(k) := rob(houses[0:k+1]). Notice that we have this relationship:
f(k) == max( houses[k] + f(k-2), f(k-1) )
This relationship holds because the maximum money for robbing
houses 0 through k is either the maximum money for robbing houses
0 through k-2 plus robbing house k (remember, the houses can't be adjacent),
or, if house k isn't that much money and house k-1 is, we might get the maximum money from robbing
houses 0 through k-1 (which puts house k off limits due to the no-adjacency rule).
Notice that to compute f(k), we only need the values f(k-2) and f(k-1), much like fibonacci.
So our memo will consist of two variables that keep track of these values and update as k goes from
0 to len(houses)-1.
'''
# Let's handle the case where the house list is empty
if not houses:
return 0
# Let's handle the cases where the houses list is only one house
if len(houses) == 1:
return houses[0]
# Let's handle the cases where the houses list is only two houses
if len(houses) == 2:
return max(houses[0],houses[1])
# initialize f(k-2) to f(0), where our sublist of houses is just the value of the first house.
fk_minus2 = houses[0]
    # initialize f(k-1) to f(1), which is the max money for houses[0:2], the first two houses.
    # We just take the max of these two house values.
fk_minus1 = max(houses[0], houses[1])
# now we march through the list:houses from position 2 onward, updating f(k-2), f(k-1)
# along the way
for house in houses[2:]:
# The max value we can get robbing houses up to and including this current house is
# either this house plus the max value up to 2 houses ago, or the max value up to the last house
fk = max( house + fk_minus2, fk_minus1)
# increment k
fk_minus2, fk_minus1 = fk_minus1, fk
# At this point, k has reached the end of the list, and so fk represents the maximum money from robbing
# the entire list of houses, which is the return value of our rob function.
return fk
# Now we make a new function
def rob_two(houses):
    # Guard the degenerate cases: with zero or one house there is no circular
    # adjacency, so the original rob() already gives the right answer.
    if len(houses) <= 1:
        return rob(houses)
    # Since the first and last houses are adjacent, we look at how much we can
    # get from robbing all the houses but the first, and then compare that to
    # what we can get from robbing all the houses but the last.
    # This works because we can never rob both the first and the last house.
    return max(rob(houses[1:]), rob(houses[:-1]))
l1 = [2,3,2]
l2 = [1,2,3,1]
l3 = [5,2,1,4]
l4 = [5,1,1,3,2,6]
print(rob(l4))
print(rob_two(l4))
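The wrap-around logic can also be checked against the problem's own examples with a condensed, self-contained version of the same two-variable DP (the helper names here are mine, not part of the solution above):

```python
def _rob_line(houses):
    # Standard two-variable DP for the non-circular problem.
    prev2, prev1 = 0, 0
    for h in houses:
        prev2, prev1 = prev1, max(h + prev2, prev1)
    return prev1

def rob_circle(houses):
    # With zero or one house there is no circular adjacency to worry about.
    if len(houses) <= 1:
        return houses[0] if houses else 0
    # Either skip the first house or skip the last one.
    return max(_rob_line(houses[1:]), _rob_line(houses[:-1]))

assert rob_circle([2, 3, 2]) == 3     # Example 1
assert rob_circle([1, 2, 3, 1]) == 4  # Example 2
```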
|
Walker Brothers Insurance provides insurance to printers and publishers in Springdale, Fayetteville, Rogers, Bentonville, Siloam Springs, Lowell, and surrounding areas.
Printing and publishing companies rely on precision, in both the speed of delivery and the accuracy of the work. Such precision requires sophisticated equipment and services. Walker Brothers Insurance understands your need to be well protected against the numerous risks that come with running your own printing and publishing business. Our goal is to reduce those risks and alleviate any unnecessary stress you may have so you can concentrate on running a successful business.
Call today and set up a consultation with an insurance specialist at Walker Brothers Insurance.
|
# -*- coding: utf-8 -*-
#
# Copyright (C) 2005-2010, TUBITAK/UEKAE
#
# This program is free software; you can redistribute it and/or modify it under
# the terms of the GNU General Public License as published by the Free
# Software Foundation; either version 2 of the License, or (at your option)
# any later version.
#
# Please read the COPYING file.
#
"""package abstraction methods to add/remove files, extract control files"""
import os.path
import gettext
__trans = gettext.translation('pisi', fallback=True)
_ = __trans.ugettext
import pisi
import pisi.context as ctx
import pisi.archive as archive
import pisi.uri
import pisi.metadata
import pisi.file
import pisi.files
import pisi.util as util
import fetcher
class Error(pisi.Error):
pass
class Package:
"""PiSi Package Class provides access to a pisi package (.pisi
file)."""
formats = ("1.0", "1.1", "1.2")
default_format = "1.2"
@staticmethod
def archive_name_and_format(package_format):
if package_format == "1.2":
archive_format = "tarxz"
archive_suffix = ctx.const.xz_suffix
elif package_format == "1.1":
archive_format = "tarlzma"
archive_suffix = ctx.const.lzma_suffix
else:
# "1.0" format does not have an archive
return (None, None)
archive_name = ctx.const.install_tar + archive_suffix
return archive_name, archive_format
def __init__(self, packagefn, mode='r', format=None, tmp_dir=None):
self.filepath = packagefn
url = pisi.uri.URI(packagefn)
if url.is_remote_file():
self.fetch_remote_file(url)
try:
self.impl = archive.ArchiveZip(self.filepath, 'zip', mode)
except IOError, e:
raise Error(_("Cannot open package file: %s") % e)
self.install_archive = None
if mode == "r":
self.metadata = self.get_metadata()
format = self.metadata.package.packageFormat
# Many of the old packages do not contain format information
# because of a bug in old Pisi versions. This is a workaround
# to guess their package format.
if format is None:
archive_name = ctx.const.install_tar + ctx.const.lzma_suffix
if self.impl.has_file(archive_name):
format = "1.1"
else:
format = "1.0"
self.format = format or Package.default_format
if self.format not in Package.formats:
raise Error(_("Unsupported package format: %s") % format)
self.tmp_dir = tmp_dir or ctx.config.tmp_dir()
def fetch_remote_file(self, url):
dest = ctx.config.cached_packages_dir()
self.filepath = os.path.join(dest, url.filename())
if not os.path.exists(self.filepath):
try:
pisi.file.File.download(url, dest)
except pisi.fetcher.FetchError:
# Bug 3465
if ctx.get_option('reinstall'):
raise Error(_("There was a problem while fetching '%s'.\nThe package "
"may have been upgraded. Please try to upgrade the package.") % url);
raise
else:
ctx.ui.info(_('%s [cached]') % url.filename())
def add_to_package(self, fn, an=None):
"""Add a file or directory to package"""
self.impl.add_to_archive(fn, an)
def add_to_install(self, name, arcname=None):
"""Add the file 'name' to the install archive"""
if arcname is None:
arcname = name
if self.format == "1.0":
arcname = util.join_path("install", arcname)
self.add_to_package(name, arcname)
return
if self.install_archive is None:
archive_name, archive_format = \
self.archive_name_and_format(self.format)
self.install_archive_path = util.join_path(self.tmp_dir,
archive_name)
ctx.build_leftover = self.install_archive_path
self.install_archive = archive.ArchiveTar(
self.install_archive_path,
archive_format)
self.install_archive.add_to_archive(name, arcname)
def add_metadata_xml(self, path):
self.metadata = pisi.metadata.MetaData()
self.metadata.read(path)
self.add_to_package(path, ctx.const.metadata_xml)
def add_files_xml(self, path):
self.files = pisi.files.Files()
self.files.read(path)
self.add_to_package(path, ctx.const.files_xml)
def close(self):
"""Close the package archive"""
if self.install_archive:
self.install_archive.close()
arcpath = self.install_archive_path
arcname = os.path.basename(arcpath)
self.add_to_package(arcpath, arcname)
self.impl.close()
if self.install_archive:
os.unlink(self.install_archive_path)
ctx.build_leftover = None
def get_install_archive(self):
archive_name, archive_format = \
self.archive_name_and_format(self.format)
if archive_name is None or not self.impl.has_file(archive_name):
return
archive_file = self.impl.open(archive_name)
tar = archive.ArchiveTar(fileobj=archive_file,
arch_type=archive_format,
no_same_permissions=False,
no_same_owner=False)
return tar
def extract(self, outdir):
"""Extract entire package contents to directory"""
self.extract_dir('', outdir) # means package root
def extract_files(self, paths, outdir):
"""Extract paths to outdir"""
self.impl.unpack_files(paths, outdir)
def extract_file(self, path, outdir):
"""Extract file with path to outdir"""
self.extract_files([path], outdir)
def extract_file_synced(self, path, outdir):
"""Extract file with path to outdir"""
data = self.impl.read_file(path)
fpath = util.join_path(outdir, path)
util.ensure_dirs(os.path.dirname(fpath))
with open(fpath, "wb") as f:
f.write(data)
f.flush()
os.fsync(f.fileno())
def extract_dir(self, dir, outdir):
"""Extract directory recursively, this function
copies the directory archiveroot/dir to outdir"""
self.impl.unpack_dir(dir, outdir)
def extract_install(self, outdir):
def callback(tarinfo, extracted):
if not extracted:
                # Installing packages (especially shared libraries) is a
                # bit tricky. You should also change the inode if you
                # change the file, because the file may already be open
                # and in use. Removing and recreating the file will also
                # change the inode and do the trick (in fact, the old
                # file will only be deleted once it is closed).
#
# Also, tar.extract() doesn't write on symlinks... Not any
# more :).
if os.path.isfile(tarinfo.name) or os.path.islink(tarinfo.name):
try:
os.unlink(tarinfo.name)
except OSError, e:
ctx.ui.warning(e)
else:
# Added for package-manager
if tarinfo.name.endswith(".desktop"):
ctx.ui.notify(pisi.ui.desktopfile, desktopfile=tarinfo.name)
tar = self.get_install_archive()
if tar:
tar.unpack_dir(outdir, callback=callback)
else:
self.extract_dir_flat('install', outdir)
def extract_dir_flat(self, dir, outdir):
"""Extract directory recursively, this function
unpacks the *contents* of directory archiveroot/dir inside outdir
this is the function used by the installer"""
self.impl.unpack_dir_flat(dir, outdir)
def extract_to(self, outdir, clean_dir = False):
"""Extracts contents of the archive to outdir. Before extracting if clean_dir
is set, outdir is deleted with its contents"""
self.impl.unpack(outdir, clean_dir)
def extract_pisi_files(self, outdir):
"""Extract PiSi control files: metadata.xml, files.xml,
action scripts, etc."""
self.extract_files([ctx.const.metadata_xml, ctx.const.files_xml], outdir)
self.extract_dir('config', outdir)
def get_metadata(self):
"""reads metadata.xml from the PiSi package and returns MetaData object"""
md = pisi.metadata.MetaData()
md.parse(self.impl.read_file(ctx.const.metadata_xml))
return md
def get_files(self):
"""reads files.xml from the PiSi package and returns Files object"""
files = pisi.files.Files()
files.parse(self.impl.read_file(ctx.const.files_xml))
return files
def read(self):
self.files = self.get_files()
self.metadata = self.get_metadata()
def pkg_dir(self):
packageDir = self.metadata.package.name + '-' \
+ self.metadata.package.version + '-' \
+ self.metadata.package.release
return os.path.join(ctx.config.packages_dir(), packageDir)
def comar_dir(self):
return os.path.join(self.pkg_dir(), ctx.const.comar_dir)
@staticmethod
def is_cached(packagefn):
url = pisi.uri.URI(packagefn)
filepath = packagefn
if url.is_remote_file():
filepath = os.path.join(ctx.config.cached_packages_dir(), url.filename())
return os.path.exists(filepath) and filepath
else:
return filepath
|
Home > michelle massaro > Tournament of Champions: Week One, Clash #2 Authors revealed!
Tournament of Champions: Week One, Clash #2 Authors revealed!
If you haven't already voted, now's the time! Today is the final day to swing the results the way you're hoping.
Twenty-six-year-old Cammie O’Shea suffers through a traumatic split-up with her fiancé and moves to Destin, Florida. Determined not to be hurt again, she avoids relationships and yearns to be near friends and family. She needs God more than ever, but feels estranged from Him. Attempting to re-build her faith, she turns to the Bible. One day she reads Romans 8:28, “And we know that in all things God works for the good of those who love him…” She ponders how living in Destin possibly could be good for her. The success of the new paper where she works hinges on her interview with Vic Deleona, a real estate developer. He thwarts her efforts to complete an article she’s writing about him, arranges extra meetings and attempts to court her. She resists his advances. Then mysterious break-ins occur at Cammie and her friend’s condos. When Cammie and Vic launch their own investigation into the vandalism, Cammie grows close to Vic. In the midst of the confusion she gets an opportunity to return home to her old job. Will Vic solve the crimes and win Cammie’s heart or will she leave Destin?
Gail’s husband, Rick, says she’s the only person he knows who can go in the grocery for a loaf of bread and come out with the cashier’s life story. That’s probably because she enjoys talking to people. In her spare time she swims or bargain shops with her daughter. Sometimes they try on garments so wrong for them, they laugh for fifteen minutes. When they finally find a treasure, they’re so pleased.
After writing articles for years Gail’s friends and family encouraged her to write books.
In 2004, the year her first book, Now Is the Time, was released, the American Christian Writers Association named her a regional writer of the year. Last year a scene from Gail’s romance e-book, Love Turns the Tide, won the Clash of the Titles competition in the best nature / weather category.
Gail is a member of ACFW and is on staff with Clash of the Titles. She wants to write books of faith that show God’s love.
Clare lives in a small town in England with her husband of 19 years and her three children. Writing from early childhood and encouraged by her teachers, she graduated from rewriting fairy stories through fanfiction to using her own original characters, and she enjoys writing an eclectic mix of romance, crime fiction and children's stories. When she's not writing, reading, sewing, keeping house, or doing the many piles of laundry her children manage to make, she's working part-time in the breakfast club at one of the local schools.
Vote, then come back tomorrow for the results of both Clashes AND reader games!
|
# -*- Mode: Python; coding: utf-8; indent-tabs-mode: nil; tab-width: 4 -*-
#
# Copyright (c) 2016 Cédric Clerget - HPC Center of Franche-Comté University
#
# This file is part of Janua-SMS
#
# http://github.com/mesocentrefc/Janua-SMS
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation v2.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
import json
from janua.actions.action import Action
from janua.actions.action import argument
from janua.ws import json_error, json_success
from janua.ws.services import urlconfig
from janua.ws.auth import auth_manager, AuthConfigError
class SetAuthBackend(Action):
"""
Set authentication backend configuration
Sample request to set configuration for LDAP backend:
.. code-block:: javascript
POST /set_auth_backend/ldap HTTP/1.1
Host: janua.mydomain.com
Content-Type: application/json
JanuaAuthToken: abcdef123456789
{
"parameters":
{
"ldap_uri": "ldap://ldap.example.fr:389",
"ldap_bind_dn": "uid=${login},ou=people,dc=example,dc=fr",
"ldap_tls_support": false
},
"success": true
}
Sample response:
.. code-block:: javascript
HTTP/1.1 200
{
"message": "Configuration has been saved",
"success": true
}
"""
category = '__INTERNAL__'
@urlconfig('/set_auth_backend/<backend>', role=['admin'])
def web(self, backend):
auth_backend = auth_manager.get(backend)
if auth_backend:
try:
auth_backend.update_config(self.parameters())
except AuthConfigError, err:
return json_error('Failed to update configuration: %s' % err)
return json_success('Configuration has been saved')
@argument(required=True)
def parameters(self):
"""Backend config parameters"""
return str()
|
People often take Labor Day for granted without knowing the true essence of the festival. It is just another holiday for most students, and even for adults. It could be an extra day off, and you can indulge in a short vacation with your family.
We wish to enlighten you on how Labor Day came to be where it is today and regarding the evolution of the workforce over the years. Technology has advanced rapidly, and the recruiting strategies are always evolving. With the frequent changes in recruiting, employers can keep up on Jobstreet’s comprehensive job portal.
Labor Day was the start of a new chapter for the workforce. It was a catalyst for change. People first celebrated it in 1882, and due to the popularity of the convention, employees now celebrate it annually.
It was first introduced in hopes of sending regards and appreciation to the workers who continually show their drive and resilience in the workforce. It was first celebrated in New York City, but other countries have since adopted the same movement. So what aspects of recruitment have changed since the very beginning?
In a patriarchally built society, the stigma against working women hindered them from reaching their fullest potential.
Early on, women were a minority in the workforce. Their participation rate was at an all-time low, and researchers remained skeptical that the situation would improve. However, with the rise of diversity in the workplace, the workforce is faring more successfully than before.
With the advancement of technology, more individuals are making use of the media as a means of advertising. There is a wide array of platforms available for finding your prospects.
More companies are allowing their employees to work from home at least once a week. An office environment is sometimes, not the best choice to maximize creativity.
Social media is the most convenient platform for outreach. The mass media can be a tool for leverage; you have nothing to lose.
The workforce has mostly changed for the better since its initial state. If you need more recruiting tips, feel free to check out Jobstreet’s website via https://www.jobstreet.com.ph/en/cms/employer/.
|
#This module will draw an S-curve for a set of selected profiles.
import MySQLdb as mdb
import os
import sys
import Params
from Outputs import *
def recordAttachmentParameterMappings(attachment,cur):
attachments=[attachment]
parameter_ids=[Params.min_calls_id]
print str(parameter_ids)
for attachment_id in attachments:
for param_id in parameter_ids:
query="insert into attachments_parameters (attachment_id,parameter_id) VALUES("+str(attachment_id)+","+str(param_id)+");"
cur.execute(query)
def getParameters(cur,user_id,parameter_group):
#get minimum number of calls to accept an allele
query="Select id,value from parameters where user_id="+str(user_id)+" and group_name='"+str(parameter_group)+"' and name like '%Minimum Reads per Locus%' order by id DESC limit 1;"
cur.execute(query)
values=cur.fetchone()
if values==None:
#use the default
query="select id,value from parameters where user_id=1 and name like '%Minimum Reads per Locus%';"
cur.execute(query)
values=cur.fetchone()
Params.min_calls_id=values[0]
Params.min_calls=values[1]
def usage():
    print "Usage: python qcSamples.py -h <host ip> -db <database> -u <database user> -p <database password> -id_user <id of user running the analysis> -id_folder <id of folder where analysis results are to be stored> -parameter_group <parameter group> -locus_group <locus group id> -quality <quality threshold> -samples [samples to analyze]"
def check_if_exists(cur,samples,locus_group):
samples.sort()
existing=[]
param_hash=str(Params.min_calls_id) +str(locus_group)
qc_name=param_hash+"qc"
for sample in samples:
qc_name=qc_name+"_"+sample
qc_name=hash(qc_name)
query="select id from images where internal_hash ="+str(qc_name)+";"
print str(query)
cur.execute(query)
id_qc=cur.fetchone()
if id_qc==None:
return []
else:
existing.append(id_qc[0])
query="select id from attachments where internal_hash ="+str(qc_name)+";"
print str(query)
cur.execute(query)
id_qc=cur.fetchone()
if id_qc==None:
return []
else:
existing.append(id_qc[0])
return existing
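The cache lookup above hinges on a deterministic key: the sample ids are sorted so that the same set of samples always hashes to the same value. A standalone sketch of that key construction (the helper name is mine, not part of this module):

```python
def qc_cache_key(samples, min_calls_id, locus_group):
    # Mirrors check_if_exists: sort the sample ids so that the same set of
    # samples always produces the same key, then fold in the parameter id
    # and the locus group before hashing.
    name = str(min_calls_id) + str(locus_group) + "qc"
    for sample in sorted(samples):
        name = name + "_" + str(sample)
    return hash(name)
```

Note that `hash()` of a string is only stable within a single process (and is salted across runs on Python 3), so a cache persisted in a database, as above, would be more robust with a digest such as `hashlib.md5`.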
def parseInputs():
host=None
db=None
user=None
password=None
samples=[]
user_id=None
folder_id=None
parameter_group=None
quality=None
locus_group=None
sys.argv=[i.replace('\xe2\x80\x93','-') for i in sys.argv]
for i in range(1,len(sys.argv)):
entry=sys.argv[i]
print "entry:"+str(entry)
if entry=='-h':
host=sys.argv[i+1]
elif entry=='-db':
db=sys.argv[i+1]
elif entry=='-u':
user=sys.argv[i+1]
elif entry=='-p':
password=sys.argv[i+1]
elif entry.__contains__('-samples'):
for j in range(i+1,len(sys.argv)):
if not sys.argv[j].startswith('-'):
samples.append(sys.argv[j])
else:
                    break
elif entry.__contains__('-id_user'):
user_id=sys.argv[i+1]
#print "set user id: "+str(user_id)
elif entry.__contains__('-id_folder'):
folder_id=sys.argv[i+1]
elif entry.__contains__('-parameter_group'):
parameter_group=sys.argv[i+1]
elif entry.__contains__('-quality'):
quality=int(sys.argv[i+1])
elif entry.__contains__('-locus_group'):
locus_group=int(sys.argv[i+1])
#make sure all the necessary arguments have been supplied
if host==None or db==None or user==None or password==None or len(samples)==0 or user_id==None or folder_id==None or locus_group==None or quality==None:
print "-1 incorrect parameters passed to function"
if host==None:
            print " missing -h (host ip)"
if db==None:
print " missing -db"
if user==None:
print " missing -u (database user)"
if password==None:
print " missing -p (database password)"
        if len(samples)==0:
            print " missing -samples (samples)"
if user_id==None:
print " missing -id_user"
if folder_id==None:
print " missing -id_folder"
if locus_group==None:
print " missing -locus_group"
if quality==None:
print "missing -quality"
#usage()
sys.exit(0)
else:
return host,db,user,password,samples,user_id,folder_id,parameter_group,locus_group,quality
def main():
host,db,user,password, samples, user_id,folder_id,parameter_group,locus_group,quality=parseInputs()
con,cur=connect(host,user,password,db);
getParameters(cur,user_id,parameter_group)
existing_id=check_if_exists(cur,samples,locus_group)
if existing_id!=[]:
print "Image already exists!"
print str(existing_id[0])+ " " + str(existing_id[1])
sys.exit(0)
#if a locus group is specified, limit the analysis only to snps within the locus group
locus_group_name="Unspecified"
locus_group_snps=[]
if locus_group!=0:
query="select locus_id from loci_loci_groups where loci_group_id="+str(locus_group)+";"
cur.execute(query)
locus_group_snps=[i[0] for i in cur.fetchall()]
query="select name from loci_groups where id="+str(locus_group)+";"
cur.execute(query)
locus_group_name=cur.fetchone()[0]
sample_to_ma=dict()
snp_to_freq=dict()
for i in range(len(samples)):
sample=samples[i]
sample_to_ma[sample]=[]
query="select minor_allele_frequency,total_count,forward_count,locus_id from calls where sample_id="+str(sample)+";"
cur.execute(query)
ma_data=cur.fetchall()
for entry in ma_data:
maf=entry[0]
counts=entry[1]
forward_counts=entry[2]
locus_id=entry[3]
if locus_id not in snp_to_freq:
snp_to_freq[locus_id]=dict()
snp_to_freq[locus_id][sample]=[maf,counts,forward_counts]
if (locus_group !=0) and locus_id not in locus_group_snps:
continue
if forward_counts <Params.min_calls:
maf=-.1*(i+1)
if (counts - forward_counts)< Params.min_calls:
maf=-.1*(i+1)
sample_to_ma[sample].append(maf)
for sample in sample_to_ma:
sample_to_ma[sample].sort()
image_id=generate_image(sample_to_ma,cur,user_id,folder_id,quality,locus_group_name,locus_group)
attachment_id=generate_attachment(sample_to_ma,cur,user_id,folder_id,parameter_group,quality,locus_group_name,locus_group_snps,locus_group,snp_to_freq)
query="update images set associated_attachment_id="+str(attachment_id)+" where id="+str(image_id)+";"
cur.execute(query)
recordAttachmentParameterMappings(attachment_id,cur)
disconnect(con,cur)
print str(image_id) + " " + str(attachment_id)
#Connect to the database
def connect(host,connecting_user,password,dbName):
try:
con = mdb.connect(host,connecting_user,password,dbName)
cur = con.cursor()
con.begin()
#Execute a test query to make sure the database connection has been successful.
return con,cur
except mdb.Error,e:
error_message = e.__str__();
print error_message
sys.exit(0)
#close connection to the database
def disconnect(con,cur):
try:
con.commit();
cur.close();
con.close();
except mdb.Error,e:
error_message=e.__str__();
print error_message
sys.exit(0)
if __name__=='__main__':
main()
|
The most interesting health benefits of jujube include its ability to treat cancer, improve the health of the skin, cleanse the blood, relieve stress, stimulate restful sleep, strengthen the immune system, protect the liver, aid in weight loss, increase bone mineral density, and detoxify the body. Jujube may sound like a funny name for a fruit, but don't let the name fool you: this is a very powerful food that packs a healthy punch for the millions of people who know its true value. Although it has common names like red date and Korean date, the scientific classification of the jujube is Ziziphus jujuba. A jujube is typically a small shrub or tree with small yellowish-green petals and drupe fruits that are about the size of a date and range from brown to purplish-black. Jujube fruits are native to southern Asia, including southern and central China. However, the plant has been introduced to the rest of the world, primarily Europe, and is available in many exotic fruit import stores.
Read more of the post Health Benefits of Jujube on Organic Facts.
|
#-*- coding: utf-8 -*-
from django.urls import reverse
from django.db import models
from datetime import timedelta
class BaseModel(models.Model):
created = models.DateField (auto_now_add=True, verbose_name="création" )
last_used = models.DateField (auto_now=True, blank=True, verbose_name="dernière utilisation" )
times_used = models.IntegerField ( default=0, verbose_name="nombre d'utilisations")
class Meta:
abstract = True
class ImageUrl(BaseModel):
dwnlTime = models.DurationField(default=timedelta(0), verbose_name="temps de téléchargement")
url = models.URLField (max_length=128, unique=True, verbose_name="URL" )
def __str__(self):
return self.url
class Meta:
verbose_name = "URL d'image"
verbose_name_plural = "URLs d'images"
class DynamicImg(BaseModel):
name = models.CharField (max_length=32, blank=True, verbose_name="nom")
urls = models.ManyToManyField(ImageUrl, verbose_name="URLs")
shadowMode = models.BooleanField (default=False, verbose_name="mode discret")
def __str__(self):
return str(self.id)
def get_absolute_url(self):
return reverse('dynimg:getimg', kwargs={'id_img': self.id})
def get_urls_nb(self):
return self.urls.count()
get_urls_nb.short_description = "Nombre d'URLs"
class Meta:
verbose_name = "image dynamique"
verbose_name_plural = "images dynamiques"
|
Further Comments: This poltergeist practical joker was reported to flush toilets while they were in use. The entity was also reported to break glasses in the bar when no one was close by.
Further Comments: Bottles and glasses would fall off shelves, doors were heard to slam shut, and footsteps were heard climbing the stairs. On several occasions a pale man wearing a long black cloak was spotted by one manager. In addition, the grey nun which haunts the Lanes has been seen just outside the inn.
Further Comments: When this building was an hotel, it was said to be haunted by a misty male figure who would sit in the chair in one room.
Further Comments: Always seen from the corner of one's eye, a good description of this spirit was never forthcoming (although it may be the ghost of a smuggler who died after tripping on the staircase in the basement). However, another ghost in the building moves slightly slower - a woman in a red dress has been seen on one occasion in the vicinity of the main bar.
Further Comments: Tim was a brewery worker who was crushed to death on this site. After the cinema took over the site, staff have felt Tim's presence watching them, and the smell of brewing beer fills the air.
|
""" Test reading of files not conforming to matlab specification
We try to read any file that matlab reads; these files are included here.
"""
from __future__ import division, print_function, absolute_import
from os.path import dirname, join as pjoin
from numpy.testing import assert_, assert_raises
from scipy.io.matlab.mio import loadmat
TEST_DATA_PATH = pjoin(dirname(__file__), 'data')
def test_multiple_fieldnames():
# Example provided by Dharhas Pothina
# Extracted using mio5.varmats_from_mat
multi_fname = pjoin(TEST_DATA_PATH, 'nasty_duplicate_fieldnames.mat')
vars = loadmat(multi_fname)
funny_names = vars['Summary'].dtype.names
assert_(set(['_1_Station_Q', '_2_Station_Q',
'_3_Station_Q']).issubset(funny_names))
def test_malformed1():
# Example from gh-6072
    # Contains malformed header data, which previously resulted in a
    # buffer overflow.
#
# Should raise an exception, not segfault
fname = pjoin(TEST_DATA_PATH, 'malformed1.mat')
assert_raises(ValueError, loadmat, fname)
|
Hot stone massage therapy melts away tension, eases muscle stiffness and increases circulation and metabolism. Each hot stone massage therapy session promotes deeper muscle relaxation through the placement of smooth, water-heated stones at key points on the body. Our professional massage therapists also incorporate a customized massage, with the use of hot stones which offers enhanced benefits.
An advanced technique that uses hot stones to loosen tight muscles.
Massage 1 offers many different types of massage therapy to suit the needs of our diverse client base. From the famous Swedish massage to specialty massages such as prenatal, we offer a wide range of massage types to choose from based on our clients' needs.
|
from hashlib import sha256
from os.path import exists
from json import JSONEncoder
from time import time
import logging
from logging.handlers import MemoryHandler
from waflib.Task import Task
from waflib.Utils import subprocess, check_dir
from waflib.Logs import debug
from os.path import dirname
#from cStringIO import StringIO
#from waflib import Utils
def logger_json_create(ctx):
logger = logging.getLogger('build.json')
logger.setLevel(logging.INFO)
if ctx.variant == "host":
file = "%s/logs/host.json" % ctx.out_dir
else:
file = "%s/logs/%s.json" % (ctx.out_dir, ctx.variant)
check_dir(dirname(file))
filetarget = logging.FileHandler(file, mode="w")
memoryhandler = MemoryHandler(1048576, target=filetarget)
logger.addHandler(memoryhandler)
return logger
def hash_files(files):
h = []
for file in files:
if exists(file):
            fp = open(file, "rb")  # binary mode: sha256 needs bytes on Python 3
h.append((file, sha256(fp.read()).hexdigest()))
fp.close()
return h
def exec_command_json(self, cmd, **kw):
# subprocess = Utils.subprocess
kw['shell'] = isinstance(cmd, str)
debug('runner_env: kw=%s' % kw)
try:
record = {}
record["time"] = time()
record["command"] = cmd
        record["variant"] = self.variant
task_self = kw["json_task_self"]
record["type"] = task_self.__class__.__name__
del kw["json_task_self"]
record["inputs"] = [x.srcpath() for x in task_self.inputs]
record["outputs"] = [x.srcpath() for x in task_self.outputs]
record["cflags"] = self.env.CFLAGS
record["cc"] = self.env.CC
kw['stdout'] = kw['stderr'] = subprocess.PIPE
time_start = time()
p = subprocess.Popen(cmd, **kw)
(stdout, stderr) = p.communicate()
record["time_duration"] = time() - time_start
if stdout:
record["stdout"] = stdout
if stderr:
record["stderr"] = stderr
record["hash"] = {}
record["hash"]["inputs"] = hash_files(record["inputs"])
record["hash"]["outputs"] = hash_files(record["outputs"])
record["retval"] = p.returncode
data = JSONEncoder(sort_keys=False, indent=False).encode(record)
self.logger_json.info(data)
return p.returncode
except OSError:
return -1
def exec_command_json_extra(self, cmd, **kw):
kw["json_task_self"] = self
    return self.exec_command_real(cmd, **kw)
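The `logger_json_create`/`exec_command_json` pair above buffers one JSON record per command in a `MemoryHandler` before flushing to a file. A minimal standalone sketch of that buffering pattern (no waf required; the names and the in-memory stream are illustrative):

```python
import io
import json
import logging
from logging.handlers import MemoryHandler

def make_json_logger(stream):
    # Buffer up to 10 records in memory, then flush them to the stream;
    # mirrors logger_json_create, with a StreamHandler instead of a FileHandler.
    logger = logging.getLogger("demo.build.json")
    logger.setLevel(logging.INFO)
    logger.addHandler(MemoryHandler(10, target=logging.StreamHandler(stream)))
    return logger

buf = io.StringIO()
log = make_json_logger(buf)
log.info(json.dumps({"command": "cc -c main.c", "retval": 0}))
# INFO records stay in the buffer until capacity is hit or an ERROR-level
# record arrives, so flush explicitly before reading the stream.
log.handlers[0].flush()
record = json.loads(buf.getvalue().strip())
```

`MemoryHandler` auto-flushes only at capacity, at `flushLevel` (ERROR by default), or on close, which is why the sketch flushes explicitly before decoding the log line.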
|
steps towards him. "Yes - Louise," she murmured.
"Yes, Raoul," the young girl replied, "I have been waiting for you."
"I beg your pardon. When I came into the room I was not aware - "
other. It was for Louise to speak, and she made an effort to do so.
my motive, Monsieur de Bragelonne."
have of me, I confess - "
Louise, interrupting him with her soft, sweet voice.
he sat, or rather fell down on a chair. "Speak," he said.
few minutes before. Raoul rose, and went to the door, which he opened.
towards Louise, he added, "Is not that what you wished?"
words, which seemed to signify, "You see that I still understand you."
him unhappy, or might wound his pride." Raoul did not reply.
think, to relate to you, very simply, everything that has befallen me.
wishes to pour itself out at your feet."
had already received; but it was impossible to meet Raoul's eyes.
"He told me you were incensed with me - and justly so, I admit."
all that I came to say."
calmer expression, and the disdainful smile upon his lip passed away.
wrong, and that you would not do it."
were not, alas! any more beside me."
"But you knew where I was, mademoiselle; you could have written to me."
"Raoul, I did not dare to do so. Raoul, I have been weak and cowardly.
which I read in your eyes."
not do me so foul a wrong as to disguise your feelings before me now!
"You are right," she said.
"Oh!" said La Valliere, "I do not ask you so much as that, Raoul."
"Impossible, impossible!" she cried, "you are mocking me."
perhaps I did not love you."
"Oh! you love me like an affectionate brother; let me hope that, Raoul."
husband, with the deepest, the truest, the fondest affection."
of, care for, anything, either in this world or the next."
"Raoul - dear Raoul! spare me, I implore you!" cried La Valliere. "Oh!
if I had but known - "
wretched man living; leave me, I entreat you. Adieu! adieu!"
"Forgive me! oh, forgive me, Raoul, for what I have done."
still?_" She buried her face in her hands.
hands to him in vain.
carried La Valliere, still fainting, to the carriage.
|
# -*- coding: utf-8 -*-
"""Jobs for IAM roles."""
import os
from ..aws.iam import instanceprofile
from .exceptions import FileDoesNotExist
from .exceptions import ImproperlyConfigured
from .exceptions import MissingKey
from .exceptions import ResourceAlreadyExists
from .exceptions import ResourceDoesNotExist
from .exceptions import ResourceNotCreated
from .exceptions import ResourceNotDeleted
from .exceptions import WaitTimedOut
from . import roles as role_jobs
from . import utils
def get_display_name(record):
"""Get the display name for a record.
Args:
record
A record returned by AWS.
Returns:
A display name for the instance profile.
"""
return record["InstanceProfileName"]
def fetch_all(profile):
"""Fetch all instance profiles.
Args:
profile
A profile to connect to AWS with.
Returns:
A list of instance profiles.
"""
params = {}
params["profile"] = profile
response = utils.do_request(instanceprofile, "get", params)
data = utils.get_data("InstanceProfiles", response)
return data
def fetch_by_name(profile, name):
"""Fetch an instance profile by name.
Args:
profile
A profile to connect to AWS with.
name
The name of the instance profile you want to fetch.
Returns:
A list of instance profiles with the provided name.
"""
params = {}
params["profile"] = profile
response = utils.do_request(instanceprofile, "get", params)
data = utils.get_data("InstanceProfiles", response)
result = [x for x in data if x["InstanceProfileName"] == name]
return result
def exists(profile, name):
"""Check if an instance profile exists.
Args:
profile
A profile to connect to AWS with.
name
The name of an instance profile.
Returns:
True if it exists, False if it doesn't.
"""
result = fetch_by_name(profile, name)
return len(result) > 0
def polling_fetch(profile, name, max_attempts=10, wait_interval=1):
"""Try to fetch an instance profile repeatedly until it exists.
Args:
profile
A profile to connect to AWS with.
name
The name of an instance profile.
max_attempts
The max number of times to poll AWS.
wait_interval
How many seconds to wait between each poll.
    Returns:
        The instance profile's data.
    Raises:
        WaitTimedOut
            If the instance profile doesn't appear before polling gives up.
"""
data = None
count = 0
while count < max_attempts:
data = fetch_by_name(profile, name)
if data:
break
else:
count += 1
sleep(wait_interval)
if not data:
msg = "Timed out waiting for instance profile to be created."
raise WaitTimedOut(msg)
return data
def create(profile, name):
"""Create an instance profile.
Args:
profile
A profile to connect to AWS with.
name
The name you want to give to the instance profile.
Returns:
Info about the newly created instance profile.
"""
# Make sure it doesn't exist already.
if exists(profile, name):
msg = "Instance profile '" + str(name) + "' already exists."
raise ResourceAlreadyExists(msg)
# Now we can create it.
params = {}
params["profile"] = profile
params["name"] = name
response = utils.do_request(instanceprofile, "create", params)
# Check that it exists.
instance_profile_data = polling_fetch(profile, name)
if not instance_profile_data:
msg = "Instance profile '" + str(name) + "' not created."
raise ResourceNotCreated(msg)
# Send back the instance profile's info.
return instance_profile_data
def delete(profile, name):
"""Delete an IAM instance profile.
Args:
profile
A profile to connect to AWS with.
name
The name of the instance profile you want to delete.
"""
# Make sure the instance profile exists.
if not exists(profile, name):
msg = "No instance profile '" + str(name) + "'."
raise ResourceDoesNotExist(msg)
# Now try to delete it.
params = {}
params["profile"] = profile
params["name"] = name
response = utils.do_request(instanceprofile, "delete", params)
# Check that it was, in fact, deleted.
if exists(profile, name):
msg = "The instance profile '" + str(name) + "' was not deleted."
raise ResourceNotDeleted(msg)
def attach(profile, instance_profile, role):
"""Attach an IAM role to an instance profile.
Args:
profile
A profile to connect to AWS with.
instance_profile
The name of an instance profile.
role
The name of a role.
Returns:
The data returned by boto3.
"""
# Make sure the instance profile exists.
if not exists(profile, instance_profile):
msg = "No instance profile '" + str(instance_profile) + "'."
raise ResourceDoesNotExist(msg)
# Make sure the role exists.
if not role_jobs.exists(profile, role):
msg = "No role '" + str(role) + "'."
raise ResourceDoesNotExist(msg)
# Attach the role to the instance profile.
params = {}
params["profile"] = profile
params["instance_profile"] = instance_profile
params["role"] = role
return utils.do_request(instanceprofile, "add_role", params)
def detach(profile, instance_profile, role):
"""Detach an IAM role from an instance profile.
Args:
profile
A profile to connect to AWS with.
        instance_profile
The name of an instance profile.
role
The name of a role.
Returns:
The data returned by boto3.
"""
# Make sure the instance profile exists.
if not exists(profile, instance_profile):
msg = "No instance profile '" + str(instance_profile) + "'."
raise ResourceDoesNotExist(msg)
# Make sure the role exists.
if not role_jobs.exists(profile, role):
msg = "No role '" + str(role) + "'."
raise ResourceDoesNotExist(msg)
# Detach the role
params = {}
params["profile"] = profile
params["instance_profile"] = instance_profile
params["role"] = role
return utils.do_request(instanceprofile, "remove_role", params)
def is_attached(profile, instance_profile, role):
"""Check if an IAM role is attached to an instance profile.
Args:
profile
A profile to connect to AWS with.
instance_profile
The name of an instance profile.
role
The name of a role.
Returns:
True if it's attached, False if it's not.
"""
# Make sure the instance profile exists.
instance_profile_data = fetch_by_name(profile, instance_profile)
if not instance_profile_data:
msg = "No instance profile '" + str(instance_profile) + "'."
raise ResourceDoesNotExist(msg)
# Make sure the role exists.
if not role_jobs.exists(profile, role):
msg = "No role '" + str(role) + "'."
raise ResourceDoesNotExist(msg)
# Check if the role is attached.
roles = utils.get_data("Roles", instance_profile_data[0])
matching_roles = [x for x in roles if x["RoleName"] == role]
return len(matching_roles) > 0
def is_detached(profile, instance_profile, role):
"""Check if an IAM role is detached from an instance profile.
Args:
profile
A profile to connect to AWS with.
instance_profile
The name of an instance profile.
role
The name of a role.
Returns:
True if it's detached, False if it's not.
"""
# Make sure the instance profile exists.
instance_profile_data = fetch_by_name(profile, instance_profile)
if not instance_profile_data:
msg = "No instance profile '" + str(instance_profile) + "'."
raise ResourceDoesNotExist(msg)
# Make sure the role exists.
if not role_jobs.exists(profile, role):
msg = "No role '" + str(role) + "'."
raise ResourceDoesNotExist(msg)
# Check if the role is detached.
roles = utils.get_data("Roles", instance_profile_data[0])
matching_roles = [x for x in roles if x["RoleName"] == role]
return len(matching_roles) == 0
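`polling_fetch` above is a retry-until-visible loop, a pattern worth isolating. A self-contained sketch under stated assumptions: the `fetch` callable and `TimeoutError` stand in for `fetch_by_name` and `WaitTimedOut`, and the wait interval is zero so the demo runs instantly:

```python
import time

def poll(fetch, max_attempts=10, wait_interval=0):
    # Call fetch() until it returns non-empty data, as polling_fetch does,
    # raising if the resource never appears within max_attempts polls.
    for _ in range(max_attempts):
        data = fetch()
        if data:
            return data
        time.sleep(wait_interval)
    raise TimeoutError("Timed out waiting for the resource to be created.")

# Simulate an AWS resource that only becomes visible on the third poll.
attempts = {"n": 0}
def fake_fetch():
    attempts["n"] += 1
    return ["profile-data"] if attempts["n"] >= 3 else []

result = poll(fake_fetch)
```

The same shape covers `create` (poll until it exists) and could be inverted for `delete` (poll until it disappears).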
|
In The Storyteller, German-Lebanese author Pierre Jarawan has created a work of art that explores the importance of family and nationality in the creation of an identity. Jarawan fills the book with stories of the Lebanese civil war, teaching the history of the country and the event without making it feel like you are learning anything or reading a history book. Samir, the main character, is incredibly relatable in his search for understanding of himself even though his circumstances are not something even close to anything I have experienced. Jarawan creates a cast of characters in which I cared about each and every one of them, despite their faults. A definite must read.
Holy crap this book is amazing! I can’t handle the book hangover I am currently experiencing. Hamlet meets drag queens who are also incredible teenage thieves. What more could you want?
Margo Manning is a rich LA socialite with a secret. In her spare time, she dresses up in drag with a group of her guy friends and pulls off incredible heists. But while her social life is full of intrigue, her family life is falling apart. Her father is sick and her mom lives in Italy. What is a girl to do?
Roehrig has created amazing characters who you root for the entire time, even when they are doing things that are less than legal. And boy oh boy is this book stormy. I just want someone to look at me the way these characters look at each other. This book is such a delight to read. I can’t even handle how much I love this book. I wish it was longer because I didn’t want it to end. Make sure to pick this one up the second it comes out on January 29th.
I ignored people telling me to read this for WAY too long! So don’t make the same mistake I did! Pick this up immediately! In Greg’s Chicago, there is an underground world filled with Dwarfs and Elves! But Greg doesn’t know that. All he knows is that he has one best friend and a dad who is always off looking for weird natural remedies and teas. But all of that changes when his dad is abducted by a… troll?! This book is a crazy fun adventure filled with a fully thought out magical world set under my very own feet. As well as being fun and entertaining, it’s also a great story of acceptance of all people, regardless of race (even if those races are magical in this book). Greg is a powerful underdog and I wish that I had him around as a role model when I was growing up.
I requested this at the library after seeing it on a bunch of lists, which I don’t usually do because I have so much other stuff to read that I already own. But I am so glad that I did. Scream All Night is a completely unique concept, pulling the reader into the bizarre world of cult horror films. Milman’s writing is great. I laughed out loud and said “This is SO GOOD!” more times than I can count. While the situations in the book are completely unique, they are somehow also incredibly relatable. I love these characters, this setting, and the writing. I just want more.
Nemesis left me hanging and so I inhaled Genesis. And this book is so much more intense than I ever imagined it could be. I love this series. Set in our real world, Reichs messes with the reality around us and speculates a way in which the world could survive a horrible disaster. If you haven’t read the first one yet, go out and get it as well as Genesis because you will need to read it immediately after.
The characters change and grow in this book more than they did in the first one. Morals are questioned, status is commented on, and everyone is tested. I breezed through this long book because I needed answers. And now I need the next one! What more could possibly happen to these characters?!?
|
# coding: utf-8
# -------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
# --------------------------------------------------------------------------
"""
FILE: sample_semantic_search_async.py
DESCRIPTION:
This sample demonstrates how to use semantic search.
USAGE:
python sample_semantic_search_async.py
Set the environment variables with your own values before running the sample:
1) AZURE_SEARCH_SERVICE_ENDPOINT - the endpoint of your Azure Cognitive Search service
2) AZURE_SEARCH_INDEX_NAME - the name of your search index (e.g. "hotels-sample-index")
3) AZURE_SEARCH_API_KEY - your search API key
"""
import os
import asyncio
async def speller():
# [START speller_async]
from azure.core.credentials import AzureKeyCredential
from azure.search.documents.aio import SearchClient
endpoint = os.getenv("AZURE_SEARCH_SERVICE_ENDPOINT")
index_name = os.getenv("AZURE_SEARCH_INDEX_NAME")
api_key = os.getenv("AZURE_SEARCH_API_KEY")
credential = AzureKeyCredential(api_key)
client = SearchClient(endpoint=endpoint,
index_name=index_name,
credential=credential)
results = await client.search(search_text="luxucy", query_language="en-us", speller="lexicon")
async for result in results:
print("{}\n{}\n)".format(result["HotelId"], result["HotelName"]))
# [END speller_async]
async def semantic_ranking():
# [START semantic_ranking_async]
from azure.core.credentials import AzureKeyCredential
    from azure.search.documents.aio import SearchClient
endpoint = os.getenv("AZURE_SEARCH_SERVICE_ENDPOINT")
index_name = os.getenv("AZURE_SEARCH_INDEX_NAME")
api_key = os.getenv("AZURE_SEARCH_API_KEY")
credential = AzureKeyCredential(api_key)
client = SearchClient(endpoint=endpoint,
index_name=index_name,
credential=credential)
    results = await client.search(search_text="luxury", query_type="semantic", query_language="en-us")
    async for result in results:
        print("{}\n{}\n".format(result["HotelId"], result["HotelName"]))
# [END semantic_ranking_async]
if __name__ == '__main__':
    asyncio.run(speller())
    asyncio.run(semantic_ranking())
|
Located in the centre of the Riviera with the longest tourism tradition in Croatia, the Astoria Design Hotel stands for a successful blend of modern design and imperial style. The hotel is situated in the Kvarner Bay, the point where the Mediterranean cuts deepest into Central Europe, into the foot of the Ucka Mountain.
Each of the 51 air-conditioned guestrooms and suites are decorated in natural tones with clean-lined furniture and equipped with the latest technology, some with a spacious terrace and spectacular sea views.
The charming restaurant with its sunny terrace invites you to enjoy the specialties from the local and international cuisine, whereas the comfortable atmosphere at the bar is an ideal spot for cocktail evenings.
Astoria Design Hotel also features a swimming pool where you can take a refreshing dip. You can also check your e-mails or surf the internet while relaxing in your room or during your afternoon cup of coffee.
|
#
# Copyright (c) 2008-2015 Citrix Systems, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License")
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
from nssrc.com.citrix.netscaler.nitro.resource.base.base_resource import base_resource
from nssrc.com.citrix.netscaler.nitro.resource.base.base_resource import base_response
from nssrc.com.citrix.netscaler.nitro.service.options import options
from nssrc.com.citrix.netscaler.nitro.exception.nitro_exception import nitro_exception
from nssrc.com.citrix.netscaler.nitro.util.nitro_util import nitro_util
class linkset(base_resource) :
""" Configuration for link set resource. """
def __init__(self) :
self._id = ""
self._ifnum = ""
self.___count = 0
@property
def id(self) :
"""Unique identifier for the linkset. Must be of the form LS/x, where x can be an integer from 1 to 32.
"""
try :
return self._id
except Exception as e:
raise e
@id.setter
def id(self, id) :
"""Unique identifier for the linkset. Must be of the form LS/x, where x can be an integer from 1 to 32.
"""
try :
self._id = id
except Exception as e:
raise e
@property
def ifnum(self) :
"""The interfaces to be bound to the linkset.
"""
try :
return self._ifnum
except Exception as e:
raise e
def _get_nitro_response(self, service, response) :
""" converts nitro response into object and returns the object array in case of get request.
"""
try :
result = service.payload_formatter.string_to_resource(linkset_response, response, self.__class__.__name__)
if(result.errorcode != 0) :
if (result.errorcode == 444) :
service.clear_session(self)
if result.severity :
if (result.severity == "ERROR") :
raise nitro_exception(result.errorcode, str(result.message), str(result.severity))
else :
raise nitro_exception(result.errorcode, str(result.message), str(result.severity))
return result.linkset
except Exception as e :
raise e
def _get_object_name(self) :
""" Returns the value of object identifier argument
"""
try :
if (self.id) :
return str(self.id)
return None
except Exception as e :
raise e
@classmethod
def add(cls, client, resource) :
""" Use this API to add linkset.
"""
try :
if type(resource) is not list :
addresource = linkset()
addresource.id = resource.id
return addresource.add_resource(client)
else :
if (resource and len(resource) > 0) :
addresources = [ linkset() for _ in range(len(resource))]
for i in range(len(resource)) :
addresources[i].id = resource[i].id
result = cls.add_bulk_request(client, addresources)
return result
except Exception as e :
raise e
@classmethod
def delete(cls, client, resource) :
""" Use this API to delete linkset.
"""
try :
if type(resource) is not list :
deleteresource = linkset()
if type(resource) != type(deleteresource):
deleteresource.id = resource
else :
deleteresource.id = resource.id
return deleteresource.delete_resource(client)
else :
if type(resource[0]) != cls :
if (resource and len(resource) > 0) :
deleteresources = [ linkset() for _ in range(len(resource))]
for i in range(len(resource)) :
deleteresources[i].id = resource[i]
else :
if (resource and len(resource) > 0) :
deleteresources = [ linkset() for _ in range(len(resource))]
for i in range(len(resource)) :
deleteresources[i].id = resource[i].id
result = cls.delete_bulk_request(client, deleteresources)
return result
except Exception as e :
raise e
@classmethod
def get(cls, client, name="", option_="") :
""" Use this API to fetch all the linkset resources that are configured on netscaler.
"""
try :
if not name :
obj = linkset()
response = obj.get_resources(client, option_)
else :
if type(name) != cls :
if type(name) is not list :
obj = linkset()
obj.id = name
response = obj.get_resource(client, option_)
else :
if name and len(name) > 0 :
response = [linkset() for _ in range(len(name))]
obj = [linkset() for _ in range(len(name))]
for i in range(len(name)) :
obj[i] = linkset()
obj[i].id = name[i]
response[i] = obj[i].get_resource(client, option_)
return response
except Exception as e :
raise e
@classmethod
def get_filtered(cls, client, filter_) :
""" Use this API to fetch filtered set of linkset resources.
        Filter string should be in JSON format, e.g.: "port:80,servicetype:HTTP".
"""
try :
obj = linkset()
option_ = options()
option_.filter = filter_
response = obj.getfiltered(client, option_)
return response
except Exception as e :
raise e
@classmethod
def count(cls, client) :
""" Use this API to count the linkset resources configured on NetScaler.
"""
try :
obj = linkset()
option_ = options()
option_.count = True
response = obj.get_resources(client, option_)
if response :
return response[0].__dict__['___count']
return 0
except Exception as e :
raise e
@classmethod
def count_filtered(cls, client, filter_) :
""" Use this API to count filtered the set of linkset resources.
Filter string should be in JSON format.eg: "port:80,servicetype:HTTP".
"""
try :
obj = linkset()
option_ = options()
option_.count = True
option_.filter = filter_
response = obj.getfiltered(client, option_)
if response :
return response[0].__dict__['___count']
return 0
except Exception as e :
raise e
class linkset_response(base_response) :
def __init__(self, length=1) :
self.linkset = []
self.errorcode = 0
self.message = ""
self.severity = ""
self.sessionid = ""
self.linkset = [linkset() for _ in range(length)]
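The `add`/`delete` classmethods above all branch on whether `resource` is a single object, a bare id, or a list of either. That "accept one or many" normalization idiom can be sketched generically; the `Res` class below is a hypothetical stand-in for a NITRO resource, not part of the SDK:

```python
class Res:
    # Hypothetical resource object with an 'id' attribute, standing in
    # for linkset in this sketch.
    def __init__(self, id_):
        self.id = id_

def ids_to_delete(resource):
    # Mirror linkset.delete's branching: accept one id, one resource,
    # or a list of either, and always return a flat list of ids.
    if type(resource) is not list:
        resource = [resource]
    return [r.id if isinstance(r, Res) else r for r in resource]

single = ids_to_delete("LS/1")
mixed = ids_to_delete([Res("LS/2"), "LS/3"])
```

Normalizing to a list up front collapses the four generated branches into one comprehension, at the cost of losing the bulk-request distinction the generated code preserves.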
|
The Scovill Manufacturing Company of Waterbury, Connecticut produced this medal during the middle of the 20th century. The Scovill Company was established in 1802 as a button manufacturer and is still in business today. Scovill was an early industrial American innovator, adapting armory manufacturing processes to mass-produce a variety of consumer goods including buttons, daguerreotype mats, and medals.
Obverse: Bust image of Clark Gable, facing three quarters to the left. The legend reads: CLARK GABLE.
Reverse: An inscription that reads METRO GOLDWYN MAYER STAR.
|
# -*- coding: utf-8 -*-
import json
from ..base.decrypter import BaseDecrypter
class YoutubeComFolder(BaseDecrypter):
__name__ = "YoutubeComFolder"
__type__ = "decrypter"
__version__ = "1.11"
__status__ = "testing"
__pattern__ = r"https?://(?:www\.|m\.)?youtube\.com/(?P<TYPE>user|playlist|view_play_list)(/|.*?[?&](?:list|p)=)(?P<ID>[\w\-]+)"
__config__ = [
("enabled", "bool", "Activated", True),
("use_premium", "bool", "Use premium account if available", True),
(
"folder_per_package",
"Default;Yes;No",
"Create folder for each package",
"Default",
),
("likes", "bool", "Grab user (channel) liked videos", False),
("favorites", "bool", "Grab user (channel) favorite videos", False),
("uploads", "bool", "Grab channel unplaylisted videos", True),
]
__description__ = """Youtube.com channel & playlist decrypter plugin"""
__license__ = "GPLv3"
__authors__ = [("Walter Purcaro", "vuolter@gmail.com")]
API_KEY = "AIzaSyAcA9c4evtwSY1ifuvzo6HKBkeot5Bk_U4"
def api_response(self, method, **kwargs):
kwargs['key'] = self.API_KEY
json_data = self.load("https://www.googleapis.com/youtube/v3/" + method, get=kwargs)
return json.loads(json_data)
def get_channel(self, user):
channels = self.api_response("channels",
part="id,snippet,contentDetails",
forUsername=user,
maxResults=50)
if channels['items']:
channel = channels['items'][0]
return {'id': channel['id'],
'title': channel['snippet']['title'],
'relatedPlaylists': channel['contentDetails']['relatedPlaylists'],
'user': user} #: One lone channel for user?
def get_playlist(self, playlist_id):
playlists = self.api_response("playlists",
part="snippet",
id=playlist_id)
if playlists['items']:
playlist = playlists['items'][0]
return {'id': playlist_id,
'title': playlist['snippet']['title'],
'channelId': playlist['snippet']['channelId'],
'channelTitle': playlist['snippet']['channelTitle']}
def _get_playlists(self, playlist_id, token=None):
if token:
playlists = self.api_response("playlists",
part="id",
maxResults=50,
channelId=playlist_id,
pageToken=token)
else:
playlists = self.api_response("playlists",
part="id",
maxResults=50,
channelId=playlist_id)
for playlist in playlists['items']:
yield playlist['id']
if "nextPageToken" in playlists:
for item in self._get_playlists(playlist_id, playlists['nextPageToken']):
yield item
def get_playlists(self, ch_id):
return [self.get_playlist(p_id) for p_id in self._get_playlists(ch_id)]
def _get_videos_id(self, playlist_id, token=None):
if token:
playlist = self.api_response("playlistItems",
part="contentDetails",
maxResults=50,
playlistId=playlist_id,
pageToken=token)
else:
playlist = self.api_response("playlistItems",
part="contentDetails",
maxResults=50,
playlistId=playlist_id)
for item in playlist["items"]:
yield item["contentDetails"]["videoId"]
if "nextPageToken" in playlist:
for item in self._get_videos_id(playlist_id, playlist["nextPageToken"]):
yield item
def get_videos_id(self, p_id):
return list(self._get_videos_id(p_id))
def decrypt(self, pyfile):
if self.info["pattern"]["TYPE"] == "user":
self.log_debug("Url recognized as Channel")
channel = self.get_channel(self.info["pattern"]["ID"])
if channel:
playlists = self.get_playlists(channel["id"])
self.log_debug(
r'{} playlists found on channel "{}"'.format(
len(playlists), channel["title"]
)
)
related_playlist = {
p_name: self.get_playlist(p_id)
for p_name, p_id in channel["relatedPlaylists"].items()
}
self.log_debug(
"Channel's related playlists found = {}".format(
list(related_playlist.keys())
)
)
related_playlist["uploads"]["title"] = "Unplaylisted videos"
related_playlist["uploads"]["checkDups"] = True #: checkDups flag
for p_name, p_data in related_playlist.items():
if self.config.get(p_name):
p_data["title"] += " of " + channel["user"]
playlists.append(p_data)
else:
playlists = []
else:
self.log_debug("Url recognized as Playlist")
playlists = [self.get_playlist(self.info["pattern"]["ID"])]
if not playlists:
self.fail(self._("No playlist available"))
added_videos = []
urlize = lambda x: "https://www.youtube.com/watch?v=" + x
for p in playlists:
p_name = p["title"]
p_videos = self.get_videos_id(p["id"])
self.log_debug(
r'{} videos found on playlist "{}"'.format(len(p_videos), p_name)
)
if not p_videos:
continue
elif "checkDups" in p:
p_urls = [urlize(v_id) for v_id in p_videos if v_id not in added_videos]
self.log_debug(
r'{} videos available on playlist "{}" after duplicates cleanup'.format(
len(p_urls), p_name
)
)
else:
p_urls = [urlize(url) for url in p_videos]
#: Folder is NOT recognized by pyload 0.5.0!
self.packages.append((p_name, p_urls, p_name))
added_videos.extend(p_videos)
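`_get_playlists` and `_get_videos_id` above walk the YouTube Data API's `nextPageToken` chain recursively; the same traversal can be written iteratively. A generic sketch with a faked two-page response (no network; the page shape mirrors the API's `items`/`nextPageToken` fields):

```python
def paginate(fetch_page):
    # fetch_page(token) -> dict with "items" and, while more pages
    # remain, a "nextPageToken"; yield every item across all pages.
    token = None
    while True:
        page = fetch_page(token)
        for item in page["items"]:
            yield item
        token = page.get("nextPageToken")
        if token is None:
            break

# Two fake pages keyed by token: the first links to the second.
pages = {
    None: {"items": ["vid1", "vid2"], "nextPageToken": "t1"},
    "t1": {"items": ["vid3"]},
}
items = list(paginate(lambda token: pages[token]))
```

The iterative form avoids growing the call stack one frame per page, which matters for channels with many playlists.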
|
Bethlehem Steel Company, Shipbuilding Division, Quincy, Mass.
Stott TE. Discussion: “Recent Operating Experience With British Naval Gas Turbines” (Trewby, G. F. A., 1963, ASME J. Eng. Power, 85, pp. 46–67). ASME. J. Eng. Power. 1963;85(1):69. doi:10.1115/1.3675223.
|
# :coding: utf-8
# :copyright: Copyright (c) 2013 Martin Pengelly-Phillips
# :license: See LICENSE.txt.
from ..error import SchemaConflictError
class Collection(object):
'''Store registered schemas.'''
def __init__(self, schemas=None):
'''Initialise collection with *schemas*.'''
self._schemas = {}
if schemas is not None:
for schema in schemas:
self.add(schema)
def add(self, schema):
'''Add *schema*.
Raise SchemaConflictError if a schema with the same id already exists.
'''
schema_id = schema['id']
try:
self.get(schema_id)
except KeyError:
self._schemas[schema_id] = schema
else:
raise SchemaConflictError('A schema is already registered with '
'id {0}'.format(schema_id))
def remove(self, schema_id):
'''Remove a schema with *schema_id*.'''
try:
self._schemas.pop(schema_id)
except KeyError:
raise KeyError('No schema found with id {0}'.format(schema_id))
def clear(self):
'''Remove all registered schemas.'''
self._schemas.clear()
def get(self, schema_id):
'''Return schema registered with *schema_id*.
Raise KeyError if no schema with *schema_id* registered.
'''
try:
schema = self._schemas[schema_id]
except KeyError:
raise KeyError('No schema found with id {0}'.format(schema_id))
else:
return schema
def items(self):
'''Yield (id, schema) pairs.'''
for schema in self:
yield (schema['id'], schema)
def __iter__(self):
'''Iterate over registered schemas.'''
for schema_id in self._schemas:
yield self.get(schema_id)
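The add/get conflict check above can be exercised in isolation. The sketch below stubs `SchemaConflictError` (normally imported from `..error`) and repeats only the parts of `Collection` it needs, so it runs standalone:

```python
class SchemaConflictError(Exception):
    """Stand-in for the error class imported from ..error."""

class MiniCollection(object):
    """Trimmed-down Collection keeping only the add/get semantics."""

    def __init__(self):
        self._schemas = {}

    def add(self, schema):
        # Same try/except/else idiom as Collection.add: the happy path
        # registers the schema, a pre-existing id raises a conflict.
        schema_id = schema['id']
        try:
            self.get(schema_id)
        except KeyError:
            self._schemas[schema_id] = schema
        else:
            raise SchemaConflictError(
                'A schema is already registered with id {0}'.format(schema_id))

    def get(self, schema_id):
        try:
            return self._schemas[schema_id]
        except KeyError:
            raise KeyError('No schema found with id {0}'.format(schema_id))

collection = MiniCollection()
collection.add({'id': 'user', 'properties': {}})
try:
    collection.add({'id': 'user'})
    conflict = None
except SchemaConflictError as error:
    conflict = str(error)
```

Registering a second schema with the same id leaves the first registration untouched and surfaces the conflict to the caller.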
|
“Turkey and Europe need each other,” Dimitris Avramopoulos, commissioner for migration, home affairs and citizenship, told Greece's Delphi Economic Forum.
Avramopoulos stressed that Turkey has fulfilled many criteria for visa liberalization under the 2016 migrant agreement.
Under the pact, Turkey agreed to take stricter measures against human smugglers and discourage irregular migration through the Aegean Sea, while the EU pledged visa-free travel for Turkish nationals within the Schengen area, provided that Ankara fulfills criteria set out by Brussels.
He said Turkey’s membership would be “beneficial for both EU and Turkey” and have “a positive impact on Greece”.
|
from glob import glob
import json
import os
from urlparse import urlparse
from twisted.internet.endpoints import serverFromString
from twisted.internet import reactor as default_reactor
from twisted.web.server import Site
from twisted.python import log
from txredisapi import Connection
from .web import PortiaWebServer
from .protocol import JsonProtocolFactory
from .exceptions import PortiaException
def start_redis(redis_uri='redis://localhost:6379/1'):
try:
url = urlparse(redis_uri)
except (AttributeError, TypeError):
raise PortiaException('Invalid url: %s.' % (redis_uri,))
if not url.hostname:
raise PortiaException('Missing Redis hostname.')
try:
int(url.path[1:])
except (IndexError, ValueError):
raise PortiaException('Invalid Redis db index.')
return Connection(url.hostname, int(url.port or 6379),
dbid=int(url.path[1:]))
def start_webserver(portia, endpoint_str, cors=None, reactor=default_reactor):
endpoint = serverFromString(reactor, str(endpoint_str))
return endpoint.listen(
Site(PortiaWebServer(portia, cors=cors).app.resource()))
def start_tcpserver(portia, endpoint_str, reactor=default_reactor):
endpoint = serverFromString(reactor, str(endpoint_str))
return endpoint.listen(JsonProtocolFactory(portia))
def compile_network_prefix_mappings(glob_paths):
mapping = {}
for glob_path in glob_paths:
for mapping_file in glob(glob_path):
if not os.path.isfile(mapping_file):
continue
log.msg('Loading mapping file: %s.' % (mapping_file,))
with open(mapping_file) as fp:
mapping.update(json.load(fp))
return mapping
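`compile_network_prefix_mappings` is pure stdlib, so its merge behaviour can be checked without Redis or Twisted. The sketch below repeats the function body (minus the Twisted logging) and feeds it two invented mapping files:

```python
import glob
import json
import os
import tempfile

def compile_network_prefix_mappings(glob_paths):
    # Same logic as above, with log.msg dropped so it runs standalone.
    mapping = {}
    for glob_path in glob_paths:
        for mapping_file in glob.glob(glob_path):
            if not os.path.isfile(mapping_file):
                continue
            with open(mapping_file) as fp:
                mapping.update(json.load(fp))
    return mapping

tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'a.json'), 'w') as fp:
    json.dump({'2777': 'network-a'}, fp)
with open(os.path.join(tmpdir, 'b.json'), 'w') as fp:
    json.dump({'2778': 'network-b'}, fp)

mapping = compile_network_prefix_mappings([os.path.join(tmpdir, '*.json')])
```

Because later files simply `dict.update()` over earlier ones, overlapping prefixes are resolved by glob order, which is platform-dependent; keep prefixes disjoint across files if the outcome must be deterministic.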
|
When is the Best Time for Posting on Social Media?
Everyone knows that posting to social media is a great way to engage with your target audience and market your brand and product. But when exactly is the best time to post?
The bulk of the research about Facebook engagement has found that posting between the hours of 12 noon and 4 pm will bring you the highest amount of views, clicks and comments. This is interesting, largely because these hours coincide with the peak of the workday, but let’s not judge those on Facebook during work and instead post content to try and connect with them.
The second-best time to post is 9 am. Presumably this is because people are just getting into work and will log into social media as a way to reset their brains and start the day.
But remember: not all days are equal. Wednesday, Thursday and Friday are by far the best days to post, with Friday the best of the three. People are ready for the weekend and feeling good, and this leads to much higher engagement.
Although it seems a bit strange, the best time to post to Instagram is actually 2 am. You might not think that a lot of people are on Instagram at this time of night, but that’s not really the point. Because so many people post during the day, sending out content in the wee hours gives your stuff a better chance at standing out. You may get your post in front of fewer eyes, but you have a higher chance of getting a response out of people at this time.
And if you do this well, by the time morning rolls around, your post will be performing, and this will put it at the top of people’s feeds, meaning even more exposure and engagement.
Wednesday seems to be the best day of the week, but engagement levels are high for nearly every day except the weekends and Tuesday.
To maximise the use of Twitter, the best time to post is 12 noon. Because Twitter is so good for giving out bite-sized pieces of content, it’s a great way for people to take a break from their fast-paced days. The second-best time to post is between 5-6 pm, which makes sense for the same reason. People are getting off of work and are looking to disconnect.
However, because of this very same reason, you should try to avoid posting on Fridays. Every other day of the week outperforms Friday, largely because people have other ways of getting their minds off work.
The best time for LinkedIn engagement is right when people get out of work. Usually people get notifications to update their profile, or to congratulate someone on their new job, during the day, and then they take time after work to respond. Engagement is also pretty high during lunch time.
So, there you have it. Of course, things will change a bit depending on the time zone you’re in and also on the lifestyle habits of your target audience. But if you keep to this schedule, you should be able to improve engagement on social media and make more use of these valuable marketing tools.
|
#!/usr/bin/python
# -*- coding: utf-8; -*-
# Copyright [2013] [Robert Allen]
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import requests
import os
import logging
import subprocess
def fetch_repos(org, token, cwd="./"):
"""
Collects all repos and iterates over them, updating or cloning each as required
:param org str:
:param token str:
:param cwd str:
"""
uri = "https://api.github.com/orgs/{0}/repos".format(org)
headers = {}
if token is not None:
headers["Authorization"] = "token {0}".format(token)
i = 0
try:
r = requests.get(uri, headers=headers)
if r.status_code != 200:
raise requests.HTTPError("unsuccessful request made to %s" % uri)
result = r.json()
for repo in result:
i += 1
if os.path.exists('%s/%s' % (cwd, repo['name'])):
git_update_mirror(repo=repo, cwd=cwd)
else:
git_clone_project(repo=repo, cwd=cwd)
except OSError as error:
logging.exception(error)
return i
def git_update_mirror(repo, cwd):
"""
Updates the project based on the information from the repo dict
:param repo dict:
:param cwd str:
"""
args = ["git", "remote", "update", "-q"]
path = "%s/%s" % (cwd, repo['name'])
logging.info("updating %s" % (repo['full_name']))
subprocess.Popen(args, cwd=path)
def git_clone_project(repo, cwd):
"""
Clones a new project based on the repo dict
:param repo dict:
:param cwd str:
"""
args = ["git", "clone", "-q", "--mirror", repo['ssh_url'], repo['name']]
path = "%s" % cwd
logging.info("cloning %s to %s" % (repo['ssh_url'], repo['name']))
subprocess.Popen(args, cwd=path)
def main():
from argparse import ArgumentParser
parser = ArgumentParser('GitHub Organization Repository Mirroring Tool')
parser.add_argument('--loglevel', type=str, choices=['CRITICAL', 'ERROR', 'WARNING', 'INFO', 'DEBUG'],
help='Available levels are CRITICAL, ERROR, WARNING, INFO, DEBUG',
default="INFO")
parser.add_argument('-d', '--directory', type=str, default=os.environ['HOME'],
help='The directory/path to mirror the repositories into, defaults to the user home.')
parser.add_argument('-t', '--token', type=str, required=True,
help='The github oauth token authorized to pull from the repositories.')
parser.add_argument('-o', '--organisation', type=str, required=True,
help='The Organisation name that owns the projects to mirror')
options = parser.parse_args()
log_level = getattr(logging, options.loglevel)
logging.basicConfig(level=log_level, format='%(message)s')
logging.info('Starting up...')
count = fetch_repos(org=options.organisation, token=options.token, cwd=options.directory)
logging.info("Run Complete [%s] repositories found..." % count)
if __name__ == "__main__":
main()
|
The Department of Anesthesiology uses the latest technology and proven techniques to maximize comfort and safety before, during, and after any procedure. The department is dedicated to the highest standards of its specialty. It handles five operating rooms in the main complex, one obstetric OT and one emergency OT, catering to a range of specialties and super-specialties from general surgery to minimally invasive laparoscopic procedures. The department assesses patient suitability and risks for surgery (PAC) and is involved in the anesthetic care of patients prior to, during and after surgery. It has seven state-of-the-art Dräger anesthesia workstations and a nine-bed post-anesthesia care unit with comprehensive monitoring facilities. We also provide acute and chronic pain management services, including labor analgesia. Doctors of the department are members of the code blue team and also provide resuscitation services in all areas of the hospital.
Qualifications: MBBS, DNB (Anesthesiology). Dr. Ajith Raghavan, senior anesthesiologist at Ahalia Hospital Mussafah, has 14 years of vast experience in anesthesiology. He provides expert care through all stages of a patient’s treatment, from preoperative consultations to post-operative care and pain management, and is specially trained to address the unique needs of children.
|
# -*- coding: utf-8 -*-
'''
Created on Aug 26, 2013
Copyright 2012 Root the Box
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
------------------------------------------------------------------------------
This file wraps the Python scripted game setup API.
It reads one or more XML files and calls the API based on their contents.
'''
import os
import logging
import defusedxml.cElementTree as ET
# We have to import all of the classes to avoid mapper errors
from setup.create_database import *
from models import dbsession
def get_child_by_tag(elem, tag_name):
''' Return child elements with a given tag '''
tags = filter(
lambda child: child.tag == tag_name, elem.getchildren()
)
return tags[0] if 0 < len(tags) else None
def get_child_text(elem, tag_name):
''' Shorthand access to .text data '''
return get_child_by_tag(elem, tag_name).text
def create_levels(levels):
''' Create GameLevel objects based on XML data '''
logging.info("Found %s game level(s)" % levels.get('count'))
for index, level_elem in enumerate(levels.getchildren()):
# GameLevel 0 is created automatically by the bootstrap
if get_child_text(level_elem, 'number') != '0':
try:
number = get_child_text(level_elem, 'number')
if GameLevel.by_number(number) is None:
game_level = GameLevel()
game_level.number = number
game_level.buyout = get_child_text(level_elem, 'buyout')
dbsession.add(game_level)
else:
logging.info("GameLevel %s already exists, skipping" % number)
except:
logging.exception("Failed to import game level #%d" % (index + 1))
dbsession.flush()
game_levels = GameLevel.all()
for index, game_level in enumerate(game_levels):
if index + 1 < len(game_levels):
game_level.next_level_id = game_levels[index + 1].id
logging.info("%r -> %r" % (game_level, game_levels[index + 1]))
dbsession.add(game_level)
dbsession.commit()
def create_hints(parent, box):
''' Create hint objects for a box '''
logging.info("Found %s hint(s)" % parent.get('count'))
for index, hint_elem in enumerate(parent.getchildren()):
try:
hint = Hint(box_id=box.id)
hint.price = get_child_text(hint_elem, 'price')
hint.description = get_child_text(hint_elem, 'description')
dbsession.add(hint)
except:
logging.exception("Failed to import hint #%d" % (index + 1))
def create_flags(parent, box):
''' Create flag objects for a box '''
logging.info("Found %s flag(s)" % parent.get('count'))
for index, flag_elem in enumerate(parent.getchildren()):
try:
name = get_child_text(flag_elem, 'name')
flag = Flag(box_id=box.id)
flag.name = name
flag.token = get_child_text(flag_elem, 'token')
flag.value = get_child_text(flag_elem, 'value')
flag.description = get_child_text(flag_elem, 'description')
flag.capture_message = get_child_text(flag_elem, 'capture_message')
flag.type = flag_elem.get('type')
dbsession.add(flag)
except:
logging.exception("Failed to import flag #%d" % (index + 1))
def create_boxes(parent, corporation):
''' Create boxes for a corporation '''
logging.info("Found %s boxes" % parent.get('count'))
for index, box_elem in enumerate(parent.getchildren()):
try:
name = get_child_text(box_elem, 'name')
game_level = GameLevel.by_number(box_elem.get('gamelevel'))
if game_level is None:
logging.warning("GameLevel does not exist for box %s, skipping" % name)
elif Box.by_name(name) is None:
box = Box(corporation_id=corporation.id)
box.name = name
box.game_level_id = game_level.id
box.difficulty = get_child_text(box_elem, 'difficulty')
box.description = get_child_text(box_elem, 'description')
box.operating_system = get_child_text(box_elem, 'operatingsystem')
box.avatar = get_child_text(box_elem, 'avatar').decode('base64')
box.garbage = get_child_text(box_elem, 'garbage')
dbsession.add(box)
dbsession.flush()
create_flags(get_child_by_tag(box_elem, 'flags'), box)
create_hints(get_child_by_tag(box_elem, 'hints'), box)
else:
logging.info("Box with name %s already exists, skipping" % name)
except:
logging.exception("Failed to import box %d" % (index + 1))
def create_corps(corps):
''' Create Corporation objects based on XML data '''
logging.info("Found %s corporation(s)" % corps.get('count'))
for index, corp_elem in enumerate(corps):
try:
corporation = Corporation()
corporation.name = get_child_text(corp_elem, 'name')
dbsession.add(corporation)
dbsession.flush()
create_boxes(get_child_by_tag(corp_elem, 'boxes'), corporation)
except:
logging.exception("Failed to create corporation #%d" % (index + 1))
def _xml_file_import(filename):
''' Parse and import a single XML file '''
logging.debug("Processing: %s" % filename)
try:
tree = ET.parse(filename)
xml_root = tree.getroot()
levels = get_child_by_tag(xml_root, "gamelevels")
create_levels(levels)
corporations = get_child_by_tag(xml_root, "corporations")
create_corps(corporations)
logging.debug("Done processing: %s" % filename)
dbsession.commit()
return True
except:
dbsession.rollback()
logging.exception("Exception raised while parsing %s, rolling back changes" % filename)
return False
def import_xml(target):
''' Import XML file or directory of files '''
target = os.path.abspath(target)
if not os.path.exists(target):
logging.error("Error: Target does not exist (%s) " % target)
elif os.path.isdir(target):
# Import any .xml files in the target directory
logging.debug("%s is a directory ..." % target)
ls = filter(lambda fname: fname.lower().endswith('.xml'), os.listdir(target))
logging.debug("Found %d XML file(s) ..." % len(ls))
results = [_xml_file_import(target + '/' + fxml) for fxml in ls]
return False not in results
else:
# Import a single file
return _xml_file_import(target)
|
In response to litigation by the National Fair Housing Alliance (NFHA), the American Civil Liberties Union (ACLU), the Communication Workers of America (CWA) and other parties, Facebook is changing its ad policy to no longer allow discrimination based on age, gender or zip code.
“Housing, employment and credit ads are crucial to helping people buy new homes, start great careers, and gain access to credit. They should never be used to exclude or harm people,” said Sandberg.
|
# coding=utf-8
# Author: Nic Wolfe <nic@wolfeden.ca>
# URL: https://sickchill.github.io
#
# This file is part of SickChill.
#
# SickChill is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# SickChill is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with SickChill. If not, see <http://www.gnu.org/licenses/>.
from __future__ import absolute_import, print_function, unicode_literals
# Stdlib Imports
import fnmatch
import os
import re
# Third Party Imports
import six
# First Party Imports
import sickbeard
from sickchill.helper.encoding import ek
# Local Folder Imports
from . import common, logger
from .name_parser.parser import InvalidNameException, InvalidShowException, NameParser
from .scene_exceptions import get_scene_exceptions
resultFilters = {
"sub(bed|ed|pack|s)",
"(dir|sub|nfo)fix",
"(?<!shomin.)sample",
"(dvd)?extras",
"dub(bed)?"
}
if getattr(sickbeard, 'IGNORED_SUBS_LIST', None):
resultFilters.add("(" + sickbeard.IGNORED_SUBS_LIST.replace(",", "|") + ")sub(bed|ed|s)?")
def containsAtLeastOneWord(name, words):
"""
Filters out results based on filter_words
name: name to check
words : string of words separated by a ',' or list of words
Returns: False if the name doesn't contain any word of words list, or the found word from the list.
"""
if isinstance(words, six.string_types):
words = words.split(',')
words = {word.strip() for word in words if word.strip()}
if not any(words):
return True
for word, regexp in six.iteritems(
{word: re.compile(r'(^|[\W_]){0}($|[\W_])'.format(re.escape(word)), re.I) for word in words}
):
if regexp.search(name):
return word
return False
def filter_bad_releases(name, parse=True, show=None):
"""
Filters out non-English and just all-around stupid releases by comparing them
to the resultFilters contents.
name: the release name to check
Returns: True if the release name is OK, False if it's bad.
"""
try:
if parse:
NameParser().parse(name)
except InvalidNameException as error:
logger.log("{0}".format(error), logger.DEBUG)
return False
except InvalidShowException:
pass
# except InvalidShowException as error:
# logger.log(u"{0}".format(error), logger.DEBUG)
# return False
def clean_set(words):
return {x.strip() for x in set((words or '').lower().split(',')) if x.strip()}
# if any of the bad strings are in the name then say no
ignore_words = resultFilters
ignore_words = ignore_words.union(clean_set(show and show.rls_ignore_words or '')) # Show specific ignored words
ignore_words = ignore_words.union(clean_set(sickbeard.IGNORE_WORDS)) # Plus Global ignored words
ignore_words = ignore_words.difference(clean_set(show and show.rls_require_words or '')) # Minus show specific required words
if sickbeard.REQUIRE_WORDS and not (show and show.rls_ignore_words): # Only remove global require words from the list if we arent using show ignore words
ignore_words = ignore_words.difference(clean_set(sickbeard.REQUIRE_WORDS))
word = containsAtLeastOneWord(name, ignore_words)
if word:
logger.log("Release: " + name + " contains " + word + ", ignoring it", logger.INFO)
return False
# if any of the good strings aren't in the name then say no
require_words = set()
require_words = require_words.union(clean_set(show and show.rls_require_words or '')) # Show specific required words
require_words = require_words.union(clean_set(sickbeard.REQUIRE_WORDS)) # Plus Global required words
require_words = require_words.difference(clean_set(show and show.rls_ignore_words or '')) # Minus show specific ignored words
if sickbeard.IGNORE_WORDS and not (show and show.rls_require_words): # Only remove global ignore words from the list if we arent using show require words
require_words = require_words.difference(clean_set(sickbeard.IGNORE_WORDS))
if require_words and not containsAtLeastOneWord(name, require_words):
logger.log("Release: " + name + " doesn't contain any of " + ', '.join(set(require_words)) +
", ignoring it", logger.INFO)
return False
return True
def allPossibleShowNames(show, season=-1):
"""
Figures out every possible variation of the name for a particular show. Includes TVDB name, TVRage name,
country codes on the end, eg. "Show Name (AU)", and any scene exception names.
show: a TVShow object that we should get the names of
Returns: a list of all the possible show names
"""
showNames = get_scene_exceptions(show.indexerid, season=season)
if not showNames:  # if we don't have any season-specific exceptions, fall back to generic exceptions
season = -1
showNames = get_scene_exceptions(show.indexerid, season=season)
showNames.append(show.name)
if not show.is_anime:
newShowNames = []
country_list = common.countryList
country_list.update(dict(zip(common.countryList.values(), common.countryList.keys())))
for curName in set(showNames):
if not curName:
continue
# if we have "Show Name Australia" or "Show Name (Australia)" this will add "Show Name (AU)" for
# any countries defined in common.countryList
# (and vice versa)
for curCountry in country_list:
if curName.endswith(' ' + curCountry):
newShowNames.append(curName.replace(' ' + curCountry, ' (' + country_list[curCountry] + ')'))
elif curName.endswith(' (' + curCountry + ')'):
newShowNames.append(curName.replace(' (' + curCountry + ')', ' (' + country_list[curCountry] + ')'))
# # if we have "Show Name (2013)" this will strip the (2013) show year from the show name
# newShowNames.append(re.sub('\(\d{4}\)', '', curName))
showNames += newShowNames
return set(showNames)
def determineReleaseName(dir_name=None, nzb_name=None):
"""Determine a release name from an nzb and/or folder name"""
if nzb_name is not None:
logger.log("Using nzb_name for release name.")
return nzb_name.rpartition('.')[0]
if dir_name is None:
return None
# try to get the release name from nzb/nfo
file_types = ["*.nzb", "*.nfo"]
for search in file_types:
reg_expr = re.compile(fnmatch.translate(search), re.I)
files = [file_name for file_name in ek(os.listdir, dir_name) if
ek(os.path.isfile, ek(os.path.join, dir_name, file_name))]
results = [f for f in files if reg_expr.search(f)]
if len(results) == 1:
found_file = ek(os.path.basename, results[0])
found_file = found_file.rpartition('.')[0]
if filter_bad_releases(found_file):
logger.log("Release name (" + found_file + ") found from file (" + results[0] + ")")
return found_file.rpartition('.')[0]
# If that fails, we try the folder
folder = ek(os.path.basename, dir_name)
if filter_bad_releases(folder):
# NOTE: Multiple failed downloads will change the folder name.
# (e.g., appending #s)
# Should we handle that?
logger.log("Folder name (" + folder + ") appears to be a valid release name. Using it.", logger.DEBUG)
return folder
return None
def hasPreferredWords(name, show=None):
"""Determine based on the full episode (file)name combined with the preferred words what the weight its preference should be"""
name = name.lower()
def clean_set(words):
weighted_words = []
words = words.lower().strip().split(',')
val = len(words)
for word in words:
weighted_words.append({"word": word, "weight": val})
val = val - 1
return weighted_words
prefer_words = []
## Because we weigh values, we can not union global and show based values, so we don't do that
if sickbeard.PREFER_WORDS:
prefer_words = clean_set(sickbeard.PREFER_WORDS)
if show and show.rls_prefer_words:
prefer_words = clean_set(show.rls_prefer_words or '')
## if nothing set, return position 0
if len(prefer_words) <= 0:
return 0
value = 0
for word_pair in prefer_words:
if word_pair['weight'] > value and word_pair['word'] in name:
value = word_pair['weight']
return value
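The word-boundary matching in `containsAtLeastOneWord` can be exercised without the sickbeard runtime. The sketch below repeats the regex construction without the `six` dependency; the release names are invented:

```python
import re

def contains_at_least_one_word(name, words):
    # Mirrors containsAtLeastOneWord above: each word must be bounded by
    # start/end of string or a non-word character, matched case-insensitively.
    if isinstance(words, str):
        words = words.split(',')
    words = {word.strip() for word in words if word.strip()}
    if not words:
        return True
    for word in words:
        pattern = r'(^|[\W_]){0}($|[\W_])'.format(re.escape(word))
        if re.search(pattern, name, re.I):
            return word
    return False

hit = contains_at_least_one_word('Show.Name.S01E01.SUBBED.720p', 'subbed,dubbed')
miss = contains_at_least_one_word('Show.Name.S01E01.720p', 'subbed,dubbed')
```

The boundary check keeps a filter word from matching inside a longer token, while still matching when the word is separated by dots or underscores, as it usually is in release names.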
|
Our garden apartment has a separate entry. At 45m² it is the smallest of the three apartments. The two rooms plus kitchen and bathroom are all tiled. Although it is located on the ground floor, the windows have a regular size and height.
The living room (12m²) is equipped with a sofa bed for two (160 x 200 cm), a modern TV and a glass cabinet.
The kitchen measures 10m² and is equipped with modern furniture. The table seats four persons and is retractable. The kitchen is equipped with: refrigerator, dishwasher, kettle, toaster, hob, oven and a coffee machine.
There are two beds (80 x 200 cm) in the bedroom and a cupboard.
The bathroom has a large shower; towels and a hairdryer are available.
|
"""
Django settings for urleater project.
For more information on this file, see
https://docs.djangoproject.com/en/1.6/topics/settings/
For the full list of settings and their values, see
https://docs.djangoproject.com/en/1.6/ref/settings/
"""
# Build paths inside the project like this: os.path.join(BASE_DIR, ...)
import os
BASE_DIR = os.path.dirname(os.path.dirname(__file__))
# Quick-start development settings - unsuitable for production
# See https://docs.djangoproject.com/en/1.6/howto/deployment/checklist/
# SECURITY WARNING: keep the secret key used in production secret!
SECRET_KEY = 'ka$u7fm!iu&r2%gn8)01p@&)3xs=s4$t4zck%=$r&!!gcb$!-e'
# SECURITY WARNING: don't run with debug turned on in production!
DEBUG = True
TEMPLATE_DEBUG = True
ALLOWED_HOSTS = []
# Application definition
INSTALLED_APPS = (
'django_admin_bootstrapped.bootstrap3',
'django_admin_bootstrapped',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
'south',
'app',
)
MIDDLEWARE_CLASSES = (
'django.contrib.sessions.middleware.SessionMiddleware',
'django.middleware.common.CommonMiddleware',
'django.middleware.csrf.CsrfViewMiddleware',
'django.contrib.auth.middleware.AuthenticationMiddleware',
'django.contrib.messages.middleware.MessageMiddleware',
'django.middleware.clickjacking.XFrameOptionsMiddleware',
)
ROOT_URLCONF = 'urleater.urls'
WSGI_APPLICATION = 'urleater.wsgi.application'
# Database
# https://docs.djangoproject.com/en/1.6/ref/settings/#databases
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'testapp',
'USER': 'testapp',
'PASSWORD': 'test',
'HOST': '',
'PORT': '',
}
}
# Internationalization
# https://docs.djangoproject.com/en/1.6/topics/i18n/
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = True
USE_L10N = True
USE_TZ = True
# Static files (CSS, JavaScript, Images)
# https://docs.djangoproject.com/en/1.6/howto/static-files/
STATIC_URL = '/static/'
#TEMPLATE_DIRS = (
# # Put strings here, like "/home/html/django_templates"
# # or "C:/www/django/templates".
# # Always use forward slashes, even on Windows.
# # Don't forget to use absolute paths, not relative paths.
# os.path.join(BASE_DIR, '../templates'),
#)
STATICFILES_DIRS = (
os.path.join(BASE_DIR, "vendors"),
os.path.join(BASE_DIR, "static"),)
|
As per the local folklore, Sridhara, a Brahmin and a devotee of Lord Venkateswara, used to live in the Tirumala Hills. Padmavati, a Gandharva Kanya, used to play the musical instrument veena in front of the Lord in the sanctum sanctorum. Once, Sridhara became attracted to Padmavati and proposed to marry her. Padmavati rejected the proposal; angered, he cursed her, and she in turn cursed him. To get rid of the curses, they both prayed to the Lord, who told them that Padmavati would take the form of a river near the river Godavari and that Sridhara would be born to a Brahmin couple in Karnataka.
As foretold, Padmavati took the form of a river near the Godavari, and Sridhara, after his pilgrimage, reached the banks of the Padmavati river. The Lord appeared in Sridhara's dreams and revealed his whereabouts. Sridhara recovered the idol from an Asvadda tree to the west of the river Padmavati. On installing the idol and performing abhishekam with water from the river Padmavati, they were both relieved of the curse.
|
# coding:utf-8
import sys
sys.path.append("..")
import tornado.ioloop
import tornado.web
import fanserve as fans
class MyTornadoFans(fans.Tornado):
app_secret = 'appsecretEnF5leY4V'
def receive_text(self, text):
if text == u'文章':
print('receive wz')
self.reply_articles([
{
"display_name": "两个故事",
"summary": "今天讲两个故事,分享给你。谁是公司?谁又是中国人?",
"image": "http://storage.mcp.weibo.cn/0JlIv.jpg",
"url": "http://e.weibo.com/mediaprofile/article/detail?uid=1722052204&aid=983319"
}
])
else:
print('receive text')
self.reply_text(text)
def receive_event(self, event):
self.reply_text('event: ' + event)
def receive_default(self, data):
self.reply_text('发送『文章』,将返回文章;发送其他文字将原文返回。')
class MainHandler(tornado.web.RequestHandler):
def get(self):
MyTornadoFans(context=self).get()
def post(self):
MyTornadoFans(context=self).post()
application = tornado.web.Application([
(r"/weibo/", MainHandler),
])
if __name__ == "__main__":
application.listen(8888)
tornado.ioloop.IOLoop.instance().start()
|
The 100 days project that I took part in is well and truly over. The project was a huge learning curve for me. I didn't intend to make it entirely bird-themed at the beginning; my intention was to create 100 abstract patterns. But there are so many talented pattern and surface designers at the moment creating really inspirational abstract pattern work, so I wanted to do something a little different. Of course, birds are my inspiration for everything, so I decided to make my entire project patterns filled to the brim with birds. Throughout the project I tried to branch out from my usual colours (well, my usual colours are black and white) and mix the colours up a little. I tried adding branches and other elements to add context to the patterns, and also decided to focus on native birds, which of course are close to my heart and my true inspiration.
In the middle of winter it was hard to find the motivation to get off the couch and go to my cold studio to scan in my drawings, so I started to create some patterns on the laptop in the warmth of the lounge. I felt like I was cheating by not hand-drawing everything, but I actually found it to be quite beneficial using the computer because I started to get an understanding of how to create seamless patterns. After all, I was setting my own brief and when I realised that I could potentially use these patterns for real life applications it would help to have them in digital format.
At the conclusion of the project I feel like I would love to continue to explore pattern making and surface design. If I were to use the patterns for anything I would like to re-draw them, as some have some potential to become something special but unfortunately were a bit rushed at the time.
The project ended on the 18th of October with an exhibition of the Wellingtonian 100 dayers at Re:Space gallery on Victoria Street. The gallery owner decided to display my project (printed out onto postcard size card) on a couple of lovely wooden benches. Most other projects were hung on the wall and looked really amazing, but it was nice to have a kind of interactive display and something people can get close to.
|
#
# (C) Copyright 2003-2010 Jacek Konieczny <jajcus@jajcus.net>
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License Version
# 2.1 as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public
# License along with this program; if not, write to the Free Software
# Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
#
"""Handling of XMPP stanzas.
Normative reference:
- `RFC 3920 <http://www.ietf.org/rfc/rfc3920.txt>`__
"""
__docformat__="restructuredtext en"
import libxml2
import logging
import threading
from pyxmpp.expdict import ExpiringDictionary
from pyxmpp.exceptions import ProtocolError, BadRequestProtocolError, FeatureNotImplementedProtocolError
from pyxmpp.stanza import Stanza
class StanzaProcessor:
"""Universal stanza handler/router class.
Provides facilities to set up custom handlers for various types of stanzas.
:Ivariables:
- `lock`: lock object used to synchronize access to the
`StanzaProcessor` object.
- `me`: local JID.
- `peer`: remote stream endpoint JID.
- `process_all_stanzas`: when `True` then all stanzas received are
considered local.
- `initiator`: `True` if local stream endpoint is the initiating entity.
"""
def __init__(self):
"""Initialize a `StanzaProcessor` object."""
self.me=None
self.peer=None
self.initiator=None
self.peer_authenticated=False
self.process_all_stanzas=True
self._iq_response_handlers=ExpiringDictionary()
self._iq_get_handlers={}
self._iq_set_handlers={}
self._message_handlers=[]
self._presence_handlers=[]
self.__logger=logging.getLogger("pyxmpp.Stream")
self.lock=threading.RLock()
def process_response(self, response):
        """Examine the response returned by a stanza handler and send all
stanzas provided.
:Returns:
- `True`: if `response` is `Stanza`, iterable or `True` (meaning the stanza was processed).
- `False`: when `response` is `False` or `None`
:returntype: `bool`
"""
if response is None or response is False:
return False
if isinstance(response, Stanza):
self.send(response)
return True
try:
response = iter(response)
except TypeError:
return bool(response)
for stanza in response:
if isinstance(stanza, Stanza):
self.send(stanza)
return True
def process_iq(self, stanza):
"""Process IQ stanza received.
:Parameters:
- `stanza`: the stanza received
If a matching handler is available pass the stanza to it.
        Otherwise ignore it if it is an "error" or "result" stanza,
        or return a "feature-not-implemented" error."""
sid=stanza.get_id()
fr=stanza.get_from()
typ=stanza.get_type()
if typ in ("result","error"):
if fr:
ufr=fr.as_unicode()
else:
ufr=None
res_handler = err_handler = None
try:
res_handler, err_handler = self._iq_response_handlers.pop((sid,ufr))
except KeyError:
                if fr==self.peer or fr==self.me or fr==self.me.bare():
try:
res_handler, err_handler = self._iq_response_handlers.pop((sid,None))
except KeyError:
pass
if None is res_handler is err_handler:
return False
if typ=="result":
response = res_handler(stanza)
else:
response = err_handler(stanza)
self.process_response(response)
return True
q=stanza.get_query()
if not q:
raise BadRequestProtocolError, "Stanza with no child element"
el=q.name
ns=q.ns().getContent()
if typ=="get":
if self._iq_get_handlers.has_key((el,ns)):
response = self._iq_get_handlers[(el,ns)](stanza)
self.process_response(response)
return True
else:
raise FeatureNotImplementedProtocolError, "Not implemented"
elif typ=="set":
if self._iq_set_handlers.has_key((el,ns)):
response = self._iq_set_handlers[(el,ns)](stanza)
self.process_response(response)
return True
else:
raise FeatureNotImplementedProtocolError, "Not implemented"
else:
raise BadRequestProtocolError, "Unknown IQ stanza type"
def __try_handlers(self,handler_list,typ,stanza):
""" Search the handler list for handlers matching
given stanza type and payload namespace. Run the
handlers found ordering them by priority until
the first one which returns `True`.
:Parameters:
- `handler_list`: list of available handlers
- `typ`: stanza type (value of its "type" attribute)
- `stanza`: the stanza to handle
:return: result of the last handler or `False` if no
handler was found."""
namespaces=[]
if stanza.xmlnode.children:
c=stanza.xmlnode.children
while c:
try:
ns=c.ns()
except libxml2.treeError:
ns=None
if ns is None:
c=c.next
continue
ns_uri=ns.getContent()
if ns_uri not in namespaces:
namespaces.append(ns_uri)
c=c.next
for handler_entry in handler_list:
t=handler_entry[1]
ns=handler_entry[2]
handler=handler_entry[3]
if t!=typ:
continue
if ns is not None and ns not in namespaces:
continue
response = handler(stanza)
if self.process_response(response):
return True
return False
def process_message(self,stanza):
"""Process message stanza.
Pass it to a handler of the stanza's type and payload namespace.
        If no handler for the actual stanza type succeeds then handlers
for type "normal" are used.
:Parameters:
- `stanza`: message stanza to be handled
"""
if not self.initiator and not self.peer_authenticated:
self.__logger.debug("Ignoring message - peer not authenticated yet")
return True
typ=stanza.get_type()
if self.__try_handlers(self._message_handlers,typ,stanza):
return True
if typ!="error":
return self.__try_handlers(self._message_handlers,"normal",stanza)
return False
def process_presence(self,stanza):
"""Process presence stanza.
Pass it to a handler of the stanza's type and payload namespace.
:Parameters:
- `stanza`: presence stanza to be handled
"""
if not self.initiator and not self.peer_authenticated:
self.__logger.debug("Ignoring presence - peer not authenticated yet")
return True
typ=stanza.get_type()
if not typ:
typ="available"
return self.__try_handlers(self._presence_handlers,typ,stanza)
def route_stanza(self,stanza):
"""Process stanza not addressed to us.
        Return a "recipient-unavailable" error if it is not
        an "error" or "result" stanza.
        This method should be overridden in derived classes if they
are supposed to handle stanzas not addressed directly to local
stream endpoint.
:Parameters:
- `stanza`: presence stanza to be processed
"""
if stanza.get_type() not in ("error","result"):
r = stanza.make_error_response("recipient-unavailable")
self.send(r)
return True
def process_stanza(self,stanza):
"""Process stanza received from the stream.
First "fix" the stanza with `self.fix_in_stanza()`,
then pass it to `self.route_stanza()` if it is not directed
to `self.me` and `self.process_all_stanzas` is not True. Otherwise
        the stanza is passed to `self.process_iq()`, `self.process_message()`
or `self.process_presence()` appropriately.
:Parameters:
- `stanza`: the stanza received.
:returns: `True` when stanza was handled
"""
self.fix_in_stanza(stanza)
to=stanza.get_to()
if to and to.node == None and (not self.me
or to.domain == self.me.domain):
# workaround for OpenFire bug
# http://community.igniterealtime.org/thread/35966
to = None
if not self.process_all_stanzas and to and to!=self.me and to.bare()!=self.me.bare():
return self.route_stanza(stanza)
try:
if stanza.stanza_type=="iq":
if self.process_iq(stanza):
return True
elif stanza.stanza_type=="message":
if self.process_message(stanza):
return True
elif stanza.stanza_type=="presence":
if self.process_presence(stanza):
return True
except ProtocolError, e:
typ = stanza.get_type()
if typ != 'error' and (typ != 'result' or stanza.stanza_type != 'iq'):
r = stanza.make_error_response(e.xmpp_name)
self.send(r)
e.log_reported()
else:
e.log_ignored()
self.__logger.debug("Unhandled %r stanza: %r" % (stanza.stanza_type,stanza.serialize()))
return False
def check_to(self,to):
"""Check "to" attribute of received stream header.
:return: `to` if it is equal to `self.me`, None otherwise.
        Should be overridden in derived classes which require other logic
for handling that attribute."""
if to!=self.me:
return None
return to
def set_response_handlers(self,iq,res_handler,err_handler,timeout_handler=None,timeout=300):
"""Set response handler for an IQ "get" or "set" stanza.
This should be called before the stanza is sent.
:Parameters:
- `iq`: an IQ stanza
- `res_handler`: result handler for the stanza. Will be called
when matching <iq type="result"/> is received. Its only
argument will be the stanza received. The handler may return
a stanza or list of stanzas which should be sent in response.
- `err_handler`: error handler for the stanza. Will be called
when matching <iq type="error"/> is received. Its only
argument will be the stanza received. The handler may return
a stanza or list of stanzas which should be sent in response
but this feature should rather not be used (it is better not to
respond to 'error' stanzas).
- `timeout_handler`: timeout handler for the stanza. Will be called
when no matching <iq type="result"/> or <iq type="error"/> is
received in next `timeout` seconds. The handler should accept
two arguments and ignore them.
- `timeout`: timeout value for the stanza. After that time if no
matching <iq type="result"/> nor <iq type="error"/> stanza is
received, then timeout_handler (if given) will be called.
"""
self.lock.acquire()
try:
self._set_response_handlers(iq,res_handler,err_handler,timeout_handler,timeout)
finally:
self.lock.release()
def _set_response_handlers(self,iq,res_handler,err_handler,timeout_handler=None,timeout=300):
"""Same as `Stream.set_response_handlers` but assume `self.lock` is acquired."""
self.fix_out_stanza(iq)
to=iq.get_to()
if to:
to=to.as_unicode()
if timeout_handler:
self._iq_response_handlers.set_item((iq.get_id(),to),
(res_handler,err_handler),
timeout,timeout_handler)
else:
self._iq_response_handlers.set_item((iq.get_id(),to),
(res_handler,err_handler),timeout)
def set_iq_get_handler(self,element,namespace,handler):
"""Set <iq type="get"/> handler.
:Parameters:
- `element`: payload element name
- `namespace`: payload element namespace URI
- `handler`: function to be called when a stanza
with defined element is received. Its only argument
will be the stanza received. The handler may return a stanza or
list of stanzas which should be sent in response.
Only one handler may be defined per one namespaced element.
If a handler for the element was already set it will be lost
after calling this method.
"""
self.lock.acquire()
try:
self._iq_get_handlers[(element,namespace)]=handler
finally:
self.lock.release()
def unset_iq_get_handler(self,element,namespace):
"""Remove <iq type="get"/> handler.
:Parameters:
- `element`: payload element name
- `namespace`: payload element namespace URI
"""
self.lock.acquire()
try:
if self._iq_get_handlers.has_key((element,namespace)):
del self._iq_get_handlers[(element,namespace)]
finally:
self.lock.release()
def set_iq_set_handler(self,element,namespace,handler):
"""Set <iq type="set"/> handler.
:Parameters:
- `element`: payload element name
- `namespace`: payload element namespace URI
- `handler`: function to be called when a stanza
with defined element is received. Its only argument
will be the stanza received. The handler may return a stanza or
list of stanzas which should be sent in response.
Only one handler may be defined per one namespaced element.
If a handler for the element was already set it will be lost
after calling this method."""
self.lock.acquire()
try:
self._iq_set_handlers[(element,namespace)]=handler
finally:
self.lock.release()
def unset_iq_set_handler(self,element,namespace):
"""Remove <iq type="set"/> handler.
:Parameters:
- `element`: payload element name.
- `namespace`: payload element namespace URI."""
self.lock.acquire()
try:
if self._iq_set_handlers.has_key((element,namespace)):
del self._iq_set_handlers[(element,namespace)]
finally:
self.lock.release()
def __add_handler(self,handler_list,typ,namespace,priority,handler):
"""Add a handler function to a prioritized handler list.
:Parameters:
- `handler_list`: a handler list.
- `typ`: stanza type.
- `namespace`: stanza payload namespace.
- `priority`: handler priority. Must be >=0 and <=100. Handlers
with lower priority list will be tried first."""
if priority<0 or priority>100:
raise ValueError,"Bad handler priority (must be in 0:100)"
handler_list.append((priority,typ,namespace,handler))
handler_list.sort()
def set_message_handler(self, typ, handler, namespace=None, priority=100):
"""Set a handler for <message/> stanzas.
:Parameters:
- `typ`: message type. `None` will be treated the same as "normal",
and will be the default for unknown types (those that have no
handler associated).
- `namespace`: payload namespace. If `None` that message with any
payload (or even with no payload) will match.
- `priority`: priority value for the handler. Handlers with lower
priority value are tried first.
- `handler`: function to be called when a message stanza
with defined type and payload namespace is received. Its only
argument will be the stanza received. The handler may return a
stanza or list of stanzas which should be sent in response.
Multiple <message /> handlers with the same type/namespace/priority may
be set. Order of calling handlers with the same priority is not defined.
Handlers will be called in priority order until one of them returns True or
any stanza(s) to send (even empty list will do).
"""
self.lock.acquire()
try:
if not typ:
typ = "normal"
self.__add_handler(self._message_handlers,typ,namespace,priority,handler)
finally:
self.lock.release()
def set_presence_handler(self,typ,handler,namespace=None,priority=100):
"""Set a handler for <presence/> stanzas.
:Parameters:
- `typ`: presence type. "available" will be treated the same as `None`.
- `namespace`: payload namespace. If `None` that presence with any
payload (or even with no payload) will match.
- `priority`: priority value for the handler. Handlers with lower
priority value are tried first.
- `handler`: function to be called when a presence stanza
with defined type and payload namespace is received. Its only
argument will be the stanza received. The handler may return a
stanza or list of stanzas which should be sent in response.
Multiple <presence /> handlers with the same type/namespace/priority may
be set. Order of calling handlers with the same priority is not defined.
Handlers will be called in priority order until one of them returns
True or any stanza(s) to send (even empty list will do).
"""
self.lock.acquire()
try:
if not typ:
typ="available"
self.__add_handler(self._presence_handlers,typ,namespace,priority,handler)
finally:
self.lock.release()
def fix_in_stanza(self,stanza):
"""Modify incoming stanza before processing it.
        This implementation does nothing. It should be overridden in derived
        classes if needed."""
pass
def fix_out_stanza(self,stanza):
"""Modify outgoing stanza before sending into the stream.
        This implementation does nothing. It should be overridden in derived
        classes if needed."""
pass
def send(self,stanza):
        """Send a stanza somewhere. This one does nothing. Should be overridden
in derived classes.
:Parameters:
- `stanza`: the stanza to send.
:Types:
- `stanza`: `pyxmpp.stanza.Stanza`"""
        raise NotImplementedError,"This method must be overridden in derived classes."
# vi: sts=4 et sw=4
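The prioritized dispatch that `__add_handler` and `__try_handlers` implement can be sketched standalone. The function names and the toy "stanza" below are hypothetical illustrations, not the pyxmpp API:

```python
def add_handler(handler_list, typ, namespace, priority, handler):
    # Handlers are stored as (priority, type, namespace, handler) tuples;
    # lower priority values are tried first, as in StanzaProcessor.
    if priority < 0 or priority > 100:
        raise ValueError("Bad handler priority (must be in 0:100)")
    handler_list.append((priority, typ, namespace, handler))
    # Sort on priority only, so handler objects are never compared.
    handler_list.sort(key=lambda entry: entry[0])

def try_handlers(handler_list, typ, namespaces, stanza):
    # Run matching handlers in priority order until one returns a
    # truthy result, mirroring StanzaProcessor.__try_handlers.
    for priority, t, ns, handler in handler_list:
        if t != typ:
            continue
        if ns is not None and ns not in namespaces:
            continue
        if handler(stanza):
            return True
    return False

handlers = []
calls = []
add_handler(handlers, "chat", None, 50, lambda s: calls.append("low") or True)
add_handler(handlers, "chat", None, 10, lambda s: calls.append("high") or True)
handled = try_handlers(handlers, "chat", ["jabber:client"], object())
# only the priority-10 handler runs; it returns True and stops dispatch
```

The same ordering rule explains why `set_message_handler` with a low priority value wins over the default handlers registered at priority 100.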
|
Should I Eat Whole-Wheat Pasta?
When we pit brown foods against white foods, the earthier colors always get the health halo. But they’re not always deserved: take brown eggs, which may have a farm-to-table je ne sais quoi but are no more nutritious than eggs that come in a white shell—they just cost more. Pasta, on the other hand, is a different story. The whole-wheat version of everyone’s favorite cheese vehicle is way healthier, say five people who know about such things.
|
#!/usr/bin/env python
# LexGen.py - implemented 2002 by Neil Hodgson neilh@scintilla.org
# Released to the public domain.
# Regenerate the Scintilla source files that list all the lexers.
# Should be run whenever a new lexer is added or removed.
# Requires Python 2.5 or later
# Files are regenerated in place with templates stored in comments.
# The format of generation comments is documented in FileGenerator.py.
from FileGenerator import Regenerate, UpdateLineInFile, ReplaceREInFile
import ScintillaData
import HFacer
def UpdateVersionNumbers(sci, root):
UpdateLineInFile(root + "win32/ScintRes.rc", "#define VERSION_SCINTILLA",
"#define VERSION_SCINTILLA \"" + sci.versionDotted + "\"")
UpdateLineInFile(root + "win32/ScintRes.rc", "#define VERSION_WORDS",
"#define VERSION_WORDS " + sci.versionCommad)
UpdateLineInFile(root + "qt/ScintillaEditBase/ScintillaEditBase.pro",
"VERSION =",
"VERSION = " + sci.versionDotted)
UpdateLineInFile(root + "qt/ScintillaEdit/ScintillaEdit.pro",
"VERSION =",
"VERSION = " + sci.versionDotted)
UpdateLineInFile(root + "doc/ScintillaDownload.html", " Release",
" Release " + sci.versionDotted)
ReplaceREInFile(root + "doc/ScintillaDownload.html",
r"/scintilla/([a-zA-Z]+)\d\d\d",
r"/scintilla/\g<1>" + sci.version)
UpdateLineInFile(root + "doc/index.html",
' <font color="#FFCC99" size="3"> Release version',
' <font color="#FFCC99" size="3"> Release version ' +\
sci.versionDotted + '<br />')
UpdateLineInFile(root + "doc/index.html",
' Site last modified',
' Site last modified ' + sci.mdyModified + '</font>')
UpdateLineInFile(root + "doc/ScintillaHistory.html",
' Released ',
' Released ' + sci.dmyModified + '.')
def RegenerateAll(root):
sci = ScintillaData.ScintillaData(root)
Regenerate(root + "src/Catalogue.cxx", "//", sci.lexerModules)
Regenerate(root + "win32/scintilla.mak", "#", sci.lexFiles)
UpdateVersionNumbers(sci, root)
HFacer.RegenerateAll(root, False)
if __name__=="__main__":
RegenerateAll("../")
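`UpdateVersionNumbers` above relies on `FileGenerator.UpdateLineInFile` to rewrite lines that start with a given prefix. A minimal standalone sketch of that behaviour, operating on a list of lines rather than a file — this is a guess at the semantics, and the real FileGenerator.py may differ (for example by replacing only the first match):

```python
def update_line(lines, prefix, replacement):
    # Replace every line that starts with `prefix` with `replacement`,
    # leaving all other lines untouched.
    return [replacement if line.startswith(prefix) else line
            for line in lines]

lines = [
    '#define VERSION_SCINTILLA "3.5.0"',
    '#define VERSION_WORDS 3, 5, 0, 0',
]
updated = update_line(lines, '#define VERSION_SCINTILLA',
                      '#define VERSION_SCINTILLA "3.6.0"')
```

Matching on a stable prefix like `#define VERSION_SCINTILLA` is what lets the script regenerate files in place without templates outside the files themselves.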
|
Phone us if you’re looking for a trustworthy, experienced and licensed Plumber Tennyson. We know that getting plumbing repairs in Tennyson can be a pain and you’ve got better things to do than look for a plumber. Adelaide 24 hour Plumbing will save you from any unnecessary hassle and expense for a Plumber Tennyson.
We make sure that wherever you need a Plumber Tennyson, Adelaide 24 hour Plumbing will assist you with your plumbing worries.
Plumbing problems with your taps, toilets, gas, hot water and drains are painful enough. You don’t need the extra stress of finding a Plumber Tennyson that you can trust. And what about all of those plumbers in Tennyson who don’t clean up after themselves, leaving mud and materials all over your home? Our professional team are different!
When you need plumbing services in Tennyson, chances are you need them now – not next week! So if you’re in Tennyson, call the plumbers who will get it fixed fast. Don’t risk the cost and mess of getting it wrong. Call the best and most highly regarded Plumber Tennyson. We will get your plumbing problems sorted out quickly.
When you need a 24 hour plumber Tennyson, you want to be sure that the 24 hour plumber Tennyson you are calling can handle the job. We know what to do with a plumbing emergency in Tennyson. We have years of experience in taking care of emergencies in Tennyson and know how to diagnose a problem quickly. Our 24 hour plumber Tennyson has the experience to take care of your problem quickly.
We do not make you wait for a 24 hour plumber Tennyson. We guarantee that we will have a 24 hour plumber Tennyson at your home or business quickly. With us, you will not be getting a phone call from a lost 24 hour plumber Tennyson asking you for directions. We are your local 24 hour plumber Tennyson.
Do you have a Blocked drain Tennyson? We are the drain repair, drain camera and drain maintenance specialists in Tennyson.
For your blocked drain Tennyson needs, call the Tennyson plumbing specialists servicing all areas.
Adelaide 24 hour Plumbing is a plumbing company employing only professional Plumbers. We are great at clearing blocked drains Tennyson. We handle drainage issues such as clearing blocked drain Tennyson.
A blocked drain Tennyson is a common problem not only for older vintage homes but also for new homes with incorrectly laid pipe work. Unfortunately, this has become more common with the demand for quick, cheap homes. In Tennyson, we use the latest in sewer and storm water drainage repair and diagnostic equipment.
Our emergency plumber Tennyson knows how to handle a plumbing emergency. We have a large amount of experience as an emergency plumber Tennyson. With that experience, we can diagnose a problem super fast in Tennyson. We do not make you wait hours for an emergency plumber Tennyson. We guarantee that an emergency Plumber Tennyson will be at your door step quickly!
To us, a plumber Tennyson emergency does not mean a quick short term fix. We know that the last thing you want to do is book another appointment to have a plumber Tennyson visit you again. We will always do our very best to ensure we fix the problem first time around. That is why we send our emergency plumber Tennyson vehicles out equipped with the most common parts. If the emergency plumber Tennyson does not have the part on board, we will order it in quickly and put a short term solution in place. We have fantastic relationships with all our suppliers in and around Tennyson. They will do their best to give us the parts we need in an emergency in Tennyson.
Drain Camera Tennyson surveys involve passing a small drain camera through your drains system to identify any issues which are hidden from normal sight above ground.
We deal with drain camera Tennyson on a regular basis. Best of all, our services never close. We can use our drain camera Tennyson around the clock. We have the equipment and skills to deal with all types of plumbing problems whenever they occur.
Need a Gas plumber Tennyson? Adelaide 24 hour Plumbing are your licensed gas professionals for gas installations and gas fittings Tennyson!
Don’t underestimate how dangerous gas in Tennyson can be. Make sure you use a gas plumber Tennyson. Gas plumber Tennyson services should always be carried out by professional licensed gas fitters.
Our professional plumbing team are well trained and experienced in Tennyson. We guarantee your job will be completed on schedule. We have a long established reputation for professionalism in Tennyson. Adelaide 24 hour Plumbing is your first choice for a reliable gas plumber Tennyson. We do gas fittings and gas installations throughout the Tennyson metropolitan area.
Adelaide 24 hour Plumbing is a leading gas plumber Tennyson. We have a great reputation for delivering quality gas plumbing services across the Tennyson metropolitan area.
Burst water pipes and leaking water mains are among the most common plumbing emergencies in Tennyson. It is easy to neglect the maintenance to the pipe system in and around your home. Fortunately, most leaks and burst pipes Tennyson are fixed easily. One phone call to us and we can fix, repair, replace and re-route your pipes. Call us now.
Our plumbers can diagnose a burst pipe Tennyson and either repair or replace it as required.
|
import io
from setuptools import setup
long_description = "See https://github.com/zerok/celery-prometheus-exporter"
with io.open('README.rst', encoding='utf-8') as fp:
long_description = fp.read()
setup(
name='celery-prometheus-exporter',
description="Simple Prometheus metrics exporter for Celery",
long_description=long_description,
version='1.7.0',
author='Horst Gutmann',
license='MIT',
author_email='horst@zerokspot.com',
url='https://github.com/zerok/celery-prometheus-exporter',
classifiers=[
'Development Status :: 3 - Alpha',
'Environment :: Console',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3 :: Only',
],
py_modules=[
'celery_prometheus_exporter',
],
install_requires=[
'celery>=3',
'prometheus_client>=0.0.20',
],
entry_points={
'console_scripts': [
'celery-prometheus-exporter = celery_prometheus_exporter:main',
],
}
)
|
2 Bedroom Apartments For Rent In Dc Minimalist Remodelling is an astounding picture that you can use for personal and non-commercial purposes, because all trademarks referenced herein are the properties of their respective owners.
Please share this 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling on your social media to share information about it with your friends and to keep this website growing. If you want to see the photo in full size, simply click the photo in the gallery below and the image will be displayed at the top of this page.
48 CONGRESS St SE WASHINGTON DC 48 MLS 48 Redfin Best 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
48 Best Apartments For Rent In Dallas TX With Pictures Extraordinary 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Anacostia Apartments For Rent Washington DC Apartments Awesome 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
What 4848 In Rent Gets You Across 480 US Cities Interesting 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
A New Owner Bought My Apartment And Wanted To Tear It Down Here's Extraordinary 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
48 NYC Studios That Prove Small Spaces Can Be Stylish Too Curbed NY Classy 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Contemporary Kitchen Cabinets In The Washington DC Area Unique 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Miramar Apartments 48 Reviews Washington DC Apartments For Rent Delectable 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Trying To Furnish Your Small Home These DCArea Stores Can Help Custom 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Apartment La Lisa Washington DC DC Booking Custom 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
48 Bedroom Apartments In The Woodlands Tx Superb Aura Memorial New 48 Stunning 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Trump Tower 48 Fifth Avenue Unit 48G 48 Bed Apt For Rent For Interesting 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
48 48th St Ne Washington DC 48 Realtor Custom 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
DC Couple On A Tight Budget Tries For 'netzero' Power On Fixer Amazing 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Gladys And David's Apartment Washington DC DC Booking Beauteous 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Colonel Apartments Washington DC Apartments Beauteous 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Modest 48 Bedroom Apartments In Dc For Top 48 Rent Minimalist Extraordinary 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
48 Bedroom Apartments In Dc Superb 48 Bedroom Apartments For Rent In Simple 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Nest DC Boutique Property Management Simple 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Industrial Washington DC Condo Conversion By Four Brothers LLC Interesting 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Washington DC Row House Design Renovation And Remodeling Mesmerizing 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Nest DC Boutique Property Management Stunning 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Classic Paris Apartment Goes Minimal With Stark Renovation Curbed Adorable 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Cat Friendly Apartments In DC No Pet Rent WC Smith Unique 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
48 Of The Best Minimalist Apartment Interiors Classy 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Nest DC Boutique Property Management Extraordinary 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Anacostia Apartments For Rent Washington DC Apartments Custom 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Washington DC Furnished Apartments Short Term Vacation Rentals DC Adorable 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Nest DC Boutique Property Management Impressive 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Trump Tower 48 Fifth Avenue Unit 48G 48 Bed Apt For Rent For Simple 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Get A Look Inside Ivanka Trump's 4848 Million DC Home Money Stunning 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Moving Wait Before You Renovate Frugalwoods Mesmerizing 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
48 Modern MicroApartments For Living Large In Big Cities Freshome Interesting 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Earthly And Ethereal An Apartment Makeover By Studio Oink Remodelista Gorgeous 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
48 Bass Pl Se Washington DC 48 Realtor Custom 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
48 SOUTHERN Ave SE WASHINGTON DC 48 MLS 48 Redfin Custom 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
48 Mindblowing AirbnbWorthy Homes In Singapore Cool 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Washington DC Furnished Apartments Short Term Vacation Rentals DC Interesting 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
48 Best Apartments For Rent In Tacoma WA With Pictures Awesome 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Apartments For Rent In Tacoma WA Zillow Delectable 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
48 Small Studio Apartments With Beautiful Design Great Studios Classy 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Fairway Park Apartments 48 Reviews Washington DC Apartments For Impressive 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Renovations Curbed Inspiration 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Renovation Of An Apartment In Barcelona By Laura Bonell Mas Fascinating 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
What 4848 In Rent Gets You Across 480 US Cities Custom 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Making More Space In A OneBedroom Apartment The New York Times Delectable 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
Top 48 Airbnb Vacation Rentals In Downtown Nashville TN Trip481 Mesmerizing 2 Bedroom Apartments For Rent In Dc Minimalist Remodelling.
|
#!/usr/bin/python
from __future__ import (absolute_import, division, print_function)
# Copyright 2019 Fortinet, Inc.
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <https://www.gnu.org/licenses/>.
__metaclass__ = type
ANSIBLE_METADATA = {'status': ['preview'],
'supported_by': 'community',
'metadata_version': '1.1'}
DOCUMENTATION = '''
---
module: fortios_firewall_multicast_policy6
short_description: Configure IPv6 multicast NAT policies in Fortinet's FortiOS and FortiGate.
description:
- This module is able to configure a FortiGate or FortiOS (FOS) device by allowing the
user to set and modify firewall feature and multicast_policy6 category.
      Examples include all parameters, and values need to be adjusted to datasources before usage.
Tested with FOS v6.0.5
version_added: "2.8"
author:
- Miguel Angel Munoz (@mamunozgonzalez)
- Nicolas Thomas (@thomnico)
notes:
- Requires fortiosapi library developed by Fortinet
- Run as a local_action in your playbook
requirements:
- fortiosapi>=0.9.8
options:
host:
description:
- FortiOS or FortiGate IP address.
type: str
required: false
username:
description:
- FortiOS or FortiGate username.
type: str
required: false
password:
description:
- FortiOS or FortiGate password.
type: str
default: ""
vdom:
description:
- Virtual domain, among those defined previously. A vdom is a
virtual instance of the FortiGate that can be configured and
used as a different unit.
type: str
default: root
https:
description:
- Indicates if the requests towards FortiGate must use HTTPS protocol.
type: bool
default: true
ssl_verify:
description:
- Ensures FortiGate certificate must be verified by a proper CA.
type: bool
default: true
version_added: 2.9
state:
description:
- Indicates whether to create or remove the object.
              This attribute was already present in previous versions at a deeper level.
              It has been moved out to this outer level.
type: str
required: false
choices:
- present
- absent
version_added: 2.9
firewall_multicast_policy6:
description:
- Configure IPv6 multicast NAT policies.
default: null
type: dict
suboptions:
state:
description:
- B(Deprecated)
- Starting with Ansible 2.9 we recommend using the top-level 'state' parameter.
- HORIZONTALLINE
- Indicates whether to create or remove the object.
type: str
required: false
choices:
- present
- absent
action:
description:
- Accept or deny traffic matching the policy.
type: str
choices:
- accept
- deny
dstaddr:
description:
- IPv6 destination address name.
type: list
suboptions:
name:
description:
- Address name. Source firewall.multicast-address6.name.
required: true
type: str
dstintf:
description:
- IPv6 destination interface name. Source system.interface.name system.zone.name.
type: str
end_port:
description:
- Integer value for ending TCP/UDP/SCTP destination port in range (1 - 65535).
type: int
id:
description:
- Policy ID.
required: true
type: int
logtraffic:
description:
- Enable/disable logging traffic accepted by this policy.
type: str
choices:
- enable
- disable
protocol:
description:
- Integer value for the protocol type as defined by IANA (0 - 255).
type: int
srcaddr:
description:
- IPv6 source address name.
type: list
suboptions:
name:
description:
- Address name. Source firewall.address6.name firewall.addrgrp6.name.
required: true
type: str
srcintf:
description:
- IPv6 source interface name. Source system.interface.name system.zone.name.
type: str
start_port:
description:
- Integer value for starting TCP/UDP/SCTP destination port in range (1 - 65535).
type: int
status:
description:
- Enable/disable this policy.
type: str
choices:
- enable
- disable
'''
EXAMPLES = '''
- hosts: localhost
vars:
host: "192.168.122.40"
username: "admin"
password: ""
vdom: "root"
ssl_verify: "False"
tasks:
- name: Configure IPv6 multicast NAT policies.
fortios_firewall_multicast_policy6:
host: "{{ host }}"
username: "{{ username }}"
password: "{{ password }}"
vdom: "{{ vdom }}"
https: "False"
state: "present"
firewall_multicast_policy6:
action: "accept"
dstaddr:
-
name: "default_name_5 (source firewall.multicast-address6.name)"
dstintf: "<your_own_value> (source system.interface.name system.zone.name)"
end_port: "7"
id: "8"
logtraffic: "enable"
protocol: "10"
srcaddr:
-
name: "default_name_12 (source firewall.address6.name firewall.addrgrp6.name)"
srcintf: "<your_own_value> (source system.interface.name system.zone.name)"
start_port: "14"
status: "enable"
'''
RETURN = '''
build:
description: Build number of the fortigate image
returned: always
type: str
sample: '1547'
http_method:
description: Last method used to provision the content into FortiGate
returned: always
type: str
sample: 'PUT'
http_status:
description: Last result given by FortiGate on last operation applied
returned: always
type: str
sample: "200"
mkey:
description: Master key (id) used in the last call to FortiGate
returned: success
type: str
sample: "id"
name:
description: Name of the table used to fulfill the request
returned: always
type: str
sample: "urlfilter"
path:
description: Path of the table used to fulfill the request
returned: always
type: str
sample: "webfilter"
revision:
description: Internal revision number
returned: always
type: str
sample: "17.0.2.10658"
serial:
description: Serial number of the unit
returned: always
type: str
sample: "FGVMEVYYQT3AB5352"
status:
description: Indication of the operation's result
returned: always
type: str
sample: "success"
vdom:
description: Virtual domain used
returned: always
type: str
sample: "root"
version:
description: Version of the FortiGate
returned: always
type: str
sample: "v5.6.3"
'''
from ansible.module_utils.basic import AnsibleModule
from ansible.module_utils.connection import Connection
from ansible.module_utils.network.fortios.fortios import FortiOSHandler
from ansible.module_utils.network.fortimanager.common import FAIL_SOCKET_MSG
def login(data, fos):
host = data['host']
username = data['username']
password = data['password']
ssl_verify = data['ssl_verify']
fos.debug('on')
if 'https' in data and not data['https']:
fos.https('off')
else:
fos.https('on')
fos.login(host, username, password, verify=ssl_verify)
def filter_firewall_multicast_policy6_data(json):
option_list = ['action', 'dstaddr', 'dstintf',
'end_port', 'id', 'logtraffic',
'protocol', 'srcaddr', 'srcintf',
'start_port', 'status']
dictionary = {}
for attribute in option_list:
if attribute in json and json[attribute] is not None:
dictionary[attribute] = json[attribute]
return dictionary
def underscore_to_hyphen(data):
if isinstance(data, list):
        for i, elem in enumerate(data):
            data[i] = underscore_to_hyphen(elem)
elif isinstance(data, dict):
new_data = {}
for k, v in data.items():
new_data[k.replace('_', '-')] = underscore_to_hyphen(v)
data = new_data
return data
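# Illustrative sketch (hypothetical helper, not part of the upstream module):
# the conversion above exists because Ansible option names use underscores
# while FortiOS REST field names use hyphens, so keys must be rewritten at
# every nesting level before the payload is sent.
def _underscore_to_hyphen_sketch(data):
    # Return a new structure instead of mutating in place.
    if isinstance(data, list):
        return [_underscore_to_hyphen_sketch(elem) for elem in data]
    if isinstance(data, dict):
        return dict((key.replace('_', '-'), _underscore_to_hyphen_sketch(value))
                    for key, value in data.items())
    return data
# Example: {'start_port': 1, 'srcaddr': [{'name': 'all'}]}
# becomes  {'start-port': 1, 'srcaddr': [{'name': 'all'}]}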
def firewall_multicast_policy6(data, fos):
vdom = data['vdom']
if 'state' in data and data['state']:
state = data['state']
    elif 'state' in data['firewall_multicast_policy6'] and data['firewall_multicast_policy6']['state']:
        state = data['firewall_multicast_policy6']['state']
else:
state = True
firewall_multicast_policy6_data = data['firewall_multicast_policy6']
filtered_data = underscore_to_hyphen(filter_firewall_multicast_policy6_data(firewall_multicast_policy6_data))
if state == "present":
return fos.set('firewall',
'multicast-policy6',
data=filtered_data,
vdom=vdom)
elif state == "absent":
return fos.delete('firewall',
'multicast-policy6',
mkey=filtered_data['id'],
vdom=vdom)
def is_successful_status(status):
    return status['status'] == "success" or \
        (status['http_method'] == "DELETE" and status['http_status'] == 404)
def fortios_firewall(data, fos):
    if data['firewall_multicast_policy6']:
        resp = firewall_multicast_policy6(data, fos)
    else:
        raise ValueError('firewall_multicast_policy6 option is missing')
    return not is_successful_status(resp), \
        resp['status'] == "success", \
        resp
def main():
fields = {
"host": {"required": False, "type": "str"},
"username": {"required": False, "type": "str"},
"password": {"required": False, "type": "str", "default": "", "no_log": True},
"vdom": {"required": False, "type": "str", "default": "root"},
"https": {"required": False, "type": "bool", "default": True},
"ssl_verify": {"required": False, "type": "bool", "default": True},
"state": {"required": False, "type": "str",
"choices": ["present", "absent"]},
"firewall_multicast_policy6": {
"required": False, "type": "dict", "default": None,
"options": {
"state": {"required": False, "type": "str",
"choices": ["present", "absent"]},
"action": {"required": False, "type": "str",
"choices": ["accept", "deny"]},
"dstaddr": {"required": False, "type": "list",
"options": {
"name": {"required": True, "type": "str"}
}},
"dstintf": {"required": False, "type": "str"},
"end_port": {"required": False, "type": "int"},
"id": {"required": True, "type": "int"},
"logtraffic": {"required": False, "type": "str",
"choices": ["enable", "disable"]},
"protocol": {"required": False, "type": "int"},
"srcaddr": {"required": False, "type": "list",
"options": {
"name": {"required": True, "type": "str"}
}},
"srcintf": {"required": False, "type": "str"},
"start_port": {"required": False, "type": "int"},
"status": {"required": False, "type": "str",
"choices": ["enable", "disable"]}
}
}
}
module = AnsibleModule(argument_spec=fields,
supports_check_mode=False)
# legacy_mode refers to using fortiosapi instead of HTTPAPI
legacy_mode = 'host' in module.params and module.params['host'] is not None and \
'username' in module.params and module.params['username'] is not None and \
'password' in module.params and module.params['password'] is not None
if not legacy_mode:
if module._socket_path:
connection = Connection(module._socket_path)
fos = FortiOSHandler(connection)
is_error, has_changed, result = fortios_firewall(module.params, fos)
else:
module.fail_json(**FAIL_SOCKET_MSG)
else:
try:
from fortiosapi import FortiOSAPI
except ImportError:
module.fail_json(msg="fortiosapi module is required")
fos = FortiOSAPI()
login(module.params, fos)
is_error, has_changed, result = fortios_firewall(module.params, fos)
fos.logout()
if not is_error:
module.exit_json(changed=has_changed, meta=result)
else:
module.fail_json(msg="Error in repo", meta=result)
if __name__ == '__main__':
main()
|
A very quick post today because I’ve been out all day, and am tired. Plum has been unwell all week and is by turns very grumpy and very tired.
But anyway. I helped someone! I was at a school garden party today, and on seeing me feed Plum, another Mum asked for my advice. She’d been enjoying feeding her beautiful daughter, but was suddenly finding herself struggling. She’d been advised to do all sorts, including offering formula top-ups, but really wanted to persevere with breastfeeding. She’d just lost her confidence. After a quick chat, it emerged they had hit the dreaded four-and-a-half-month growth spurt.
I was able to reassure her that baby’s sudden desire to feed almost constantly was absolutely normal, and that it would pass and they would come out the other side of it. I was also able to give her details of her local breastfeeding group and assure her she would be very welcome there. She seemed happier by the end of our chat.
I really hope I was able to make a difference, at least in reinforcing her confidence in her ability to feed her daughter.
|
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import datetime
import json
import os
import taskcluster
class TaskBuilder(object):
def __init__(self, task_id, repo_url, branch, commit, owner, source, scheduler_id):
self.task_id = task_id
self.repo_url = repo_url
self.branch = branch
self.commit = commit
self.owner = owner
self.source = source
self.scheduler_id = scheduler_id
def build_task(self, name, description, command, dependencies = [], artifacts = {}, scopes = [], features = {}, worker_type = 'github-worker'):
created = datetime.datetime.now()
expires = taskcluster.fromNow('1 year')
deadline = taskcluster.fromNow('1 day')
features = features.copy()
features.update({
"taskclusterProxy": True
})
return {
"workerType": worker_type,
"taskGroupId": self.task_id,
"schedulerId": self.scheduler_id,
"expires": taskcluster.stringDate(expires),
"retries": 5,
"created": taskcluster.stringDate(created),
"tags": {},
"priority": "lowest",
"deadline": taskcluster.stringDate(deadline),
"dependencies": [ self.task_id ] + dependencies,
"routes": [],
"scopes": scopes,
"requires": "all-completed",
"payload": {
"features": features,
"maxRunTime": 7200,
"image": "mozillamobile/focus-android:1.4",
"command": [
"/bin/bash",
"--login",
"-c",
command
],
"artifacts": artifacts,
"deadline": taskcluster.stringDate(deadline)
},
"provisionerId": "aws-provisioner-v1",
"metadata": {
"name": name,
"description": description,
"owner": self.owner,
"source": self.source
}
}
def craft_signing_task(self, build_task_id, name, description, signing_format, is_staging, apks=[], scopes=[], routes=[]):
created = datetime.datetime.now()
expires = taskcluster.fromNow('1 year')
deadline = taskcluster.fromNow('1 day')
return {
"workerType": 'mobile-signing-dep-v1' if is_staging else 'mobile-signing-v1',
"taskGroupId": self.task_id,
"schedulerId": self.scheduler_id,
"expires": taskcluster.stringDate(expires),
"retries": 5,
"created": taskcluster.stringDate(created),
"tags": {},
"priority": "lowest",
"deadline": taskcluster.stringDate(deadline),
"dependencies": [ self.task_id, build_task_id],
"routes": routes,
"scopes": scopes,
"requires": "all-completed",
"payload": {
"maxRunTime": 3600,
"upstreamArtifacts": [
{
"paths": apks,
"formats": [signing_format],
"taskId": build_task_id,
"taskType": "build"
}
]
},
"provisionerId": "scriptworker-prov-v1",
"metadata": {
"name": name,
"description": description,
"owner": self.owner,
"source": self.source
}
}
def craft_push_task(self, signing_task_id, name, description, is_staging, apks=[], scopes=[], channel='internal', commit=False):
created = datetime.datetime.now()
expires = taskcluster.fromNow('1 year')
deadline = taskcluster.fromNow('1 day')
return {
"workerType": 'mobile-pushapk-dep-v1' if is_staging else 'mobile-pushapk-v1',
"taskGroupId": self.task_id,
"schedulerId": self.scheduler_id,
"expires": taskcluster.stringDate(expires),
"retries": 5,
"created": taskcluster.stringDate(created),
"tags": {},
"priority": "lowest",
"deadline": taskcluster.stringDate(deadline),
"dependencies": [ self.task_id, signing_task_id],
"routes": [],
"scopes": scopes,
"requires": "all-completed",
"payload": {
"commit": commit,
"channel": channel,
"upstreamArtifacts": [
{
"paths": apks,
"taskId": signing_task_id,
"taskType": "signing"
}
]
},
"provisionerId": "scriptworker-prov-v1",
"metadata": {
"name": name,
"description": description,
"owner": self.owner,
"source": self.source
}
}
def schedule_task(queue, taskId, task):
    print("TASK", taskId)
    print(json.dumps(task, indent=4, separators=(',', ': ')))
    result = queue.createTask(taskId, task)
    print("RESULT", taskId)
    print(json.dumps(result))
|
Only the Great Ocean Road separates you from the beach, with Angahook National Park as your backyard!
Enjoy spectacular unbroken ocean views stretching from Moggs Creek to Lorne. Nestled at the foot of the enchanting Angahook National Park, this cosy and comfortable 4-bedroom home with breathtaking ocean views is the perfect retreat.
Upstairs the main living, kitchen and dining area has commanding ocean views with the comfort of a wood fire for those chilly days. The upstairs main bedroom with ensuite has a spectacular ocean and mountain vista. The sheltered deck provides protection from the wind where you can enjoy lazing on the bench, dinner on the deck or a sumptuous BBQ for 8.
|
# Stairbuilder - Tread generation
#
# Generates treads for stair generation.
# Stair Type (typ):
# - id1 = Freestanding staircase
# - id2 = Housed-open staircase
# - id3 = Box staircase
# - id4 = Circular staircase
# Tread Type (typ_t):
# - tId1 = Classic
# - tId2 = Basic Steel
# - tId3 = Bar 1
# - tId4 = Bar 2
# - tId5 = Bar 3
#
# Paul "BrikBot" Marshall
# Created: September 19, 2011
# Last Modified: January 26, 2012
# Homepage (blog): http://post.darkarsenic.com/
# //blog.darkarsenic.com/
#
# Coded in IDLE, tested in Blender 2.61.
# Search for "@todo" to quickly find sections that need work.
#
# ##### BEGIN GPL LICENSE BLOCK #####
#
# Stairbuilder is for quick stair generation.
# Copyright (C) 2011 Paul Marshall
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# ##### END GPL LICENSE BLOCK #####
import mathutils
from copy import copy
from math import radians, sqrt
from mathutils import Matrix, Vector
class Treads:
def __init__(self,G,typ,typ_t,run,w,h,d,r,toe,o,n,tk,sec,sp,sn,deg=4):
self.G = G #General
self.typ = typ #Stair type
self.typ_t = typ_t #Tread type
self.run = run #Stair run. Degrees if self.typ == "id4"
self.w=w #tread width. Is outer radius if self.typ == "id4"
self.h=h #tread height
self.d=d #tread run. Ignore for now if self.typ == "id4"
self.r=r #tread rise
self.t=toe #tread nosing
self.o=o #tread side overhang. Is inner radius if self.typ == "id4"
self.n=n #number of treads
self.tk=tk #thickness of tread metal
self.sec=sec #metal sections for tread
if sec != 1 and typ_t not in ["tId4", "tId5"]:
self.sp=((d+toe)*(sp/100))/(sec-1) #spacing between sections (% of depth)
elif typ_t in ["tId4", "tId5"]:
self.sp=sp/100 #keep % value
else:
self.sp=0
self.sn=sn #number of cross sections
        self.deg = deg #number of sections per "slice". Only applies if self.typ == "id4"
self.tId2_faces = [[0,1,2,3],[0,3,4,5],[4,5,6,7],[6,7,8,9],[8,9,10,11],
[12,13,14,15],[12,15,16,17],[16,17,18,19],
[18,19,20,21],[20,21,22,23],[0,1,13,12],[1,2,14,13],
[2,3,15,14],[3,4,16,15],[4,7,19,16],[7,8,20,19],
[8,11,23,20],[11,10,22,23],[10,9,21,22],[9,6,18,21],
[6,5,17,18],[5,0,12,17]]
self.out_faces = [[0,2,3,1],[0,2,10,8],[9,11,3,1],[9,11,10,8],
[2,6,7,3],[2,6,14,10],[11,15,7,3],[11,15,14,10],
[0,4,5,1],[0,4,12,8],[9,13,5,1],[9,13,12,8],
[4,6,7,5],[4,6,14,12],[13,15,14,12],[13,15,7,5]]
self.Create()
def Create(self):
# Setup the coordinates:
coords = []
coords2 = []
coords3 = []
cross = 0
cW = 0
depth = 0
offset = 0
height = 0
if self.typ in ["id1", "id2", "id3"]:
if self.typ_t == "tId1":
coords.append(Vector([-self.t,-self.o,0]))
coords.append(Vector([self.d,-self.o,0]))
coords.append(Vector([-self.t,self.w + self.o,0]))
coords.append(Vector([self.d,self.w + self.o,0]))
for i in range(4):
coords.append(coords[i]+Vector([0,0,-self.h]))
elif self.typ_t == "tId2":
depth = (self.d + self.t - (self.sec - 1) * self.sp) / self.sec
inset = depth / 4
tDepth = depth - self.t
coords.append(Vector([-self.t, -self.o, -self.h])) #0
coords.append(Vector([inset - self.t, -self.o, -self.h])) #1
coords.append(Vector([inset - self.t, -self.o, -self.h + self.tk])) #2
coords.append(Vector([self.tk - self.t, -self.o, -self.h + self.tk])) #3
coords.append(Vector([self.tk - self.t, -self.o, -self.tk])) #4
coords.append(Vector([-self.t, -self.o, 0])) #5
coords.append(Vector([tDepth, -self.o, 0])) #6
coords.append(Vector([tDepth - self.tk, -self.o, -self.tk])) #7
coords.append(Vector([tDepth - self.tk, -self.o, self.tk - self.h])) #8
coords.append(Vector([tDepth, -self.o, -self.h])) #9
coords.append(Vector([tDepth - inset, -self.o, -self.h])) #10
coords.append(Vector([tDepth - inset, -self.o, -self.h + self.tk])) #11
for i in range(12):
coords.append(coords[i] + Vector([0, self.w + (2 * self.o), 0]))
elif self.typ_t in ["tId3", "tId4", "tId5"]:
# Frame:
coords.append(Vector([-self.t,-self.o,-self.h]))
coords.append(Vector([self.d,-self.o,-self.h]))
coords.append(Vector([-self.t,-self.o,0]))
coords.append(Vector([self.d,-self.o,0]))
for i in range(4):
if (i % 2) == 0:
coords.append(coords[i] + Vector([self.tk,self.tk,0]))
else:
coords.append(coords[i] + Vector([-self.tk,self.tk,0]))
for i in range(4):
coords.append(coords[i] + Vector([0,self.w + self.o,0]))
for i in range(4):
coords.append(coords[i + 4] + Vector([0,self.w + self.o - (2 * self.tk),0]))
# Tread sections:
if self.typ_t == "tId3":
offset = (self.tk * sqrt(2)) / 2
topset = self.h - offset
self.sp = ((self.d + self.t - (2 * self.tk)) - (offset * (self.sec) + topset)) / (self.sec + 1)
baseX = -self.t + self.sp + self.tk
coords2.append(Vector([baseX, self.tk - self.o, offset - self.h]))
coords2.append(Vector([baseX + offset, self.tk - self.o, -self.h]))
for i in range(2):
coords2.append(coords2[i] + Vector([topset, 0, topset]))
for i in range(4):
coords2.append(coords2[i] + Vector([0, (self.w + self.o) - (2 * self.tk), 0]))
elif self.typ_t in ["tId4", "tId5"]:
offset = ((self.run + self.t) * self.sp) / (self.sec + 1)
topset = (((self.run + self.t) * (1 - self.sp)) - (2 * self.tk)) / self.sec
baseX = -self.t + self.tk + offset
baseY = self.w + self.o - 2 * self.tk
coords2.append(Vector([baseX, -self.o + self.tk, -self.h / 2]))
coords2.append(Vector([baseX + topset, -self.o + self.tk, -self.h / 2]))
coords2.append(Vector([baseX, -self.o + self.tk, 0]))
coords2.append(Vector([baseX + topset, -self.o + self.tk, 0]))
for i in range(4):
coords2.append(coords2[i] + Vector([0, baseY, 0]))
# Tread cross-sections:
if self.typ_t in ["tId3", "tId4"]:
cW = self.tk
cross = (self.w + (2 * self.o) - (self.sn + 2) * self.tk) / (self.sn + 1)
else: # tId5
spacing = self.sp ** (1 / 4)
cross = ((2*self.o + self.w) * spacing) / (self.sn + 1)
cW = (-2*self.tk + (2*self.o + self.w) * (1 - spacing)) / self.sn
self.sp = topset
height = -self.h / 2
baseY = -self.o + self.tk + cross
coords3.append(Vector([-self.t + self.tk, baseY, -self.h]))
coords3.append(Vector([self.d - self.tk, baseY, -self.h]))
coords3.append(Vector([-self.t + self.tk, baseY, height]))
coords3.append(Vector([self.d - self.tk, baseY, height]))
for i in range(4):
coords3.append(coords3[i] + Vector([0, cW, 0]))
# Make the treads:
for i in range(self.n):
if self.typ_t == "tId1":
self.G.Make_mesh(coords,self.G.faces,'treads')
elif self.typ_t == "tId2":
temp = []
for j in coords:
temp.append(copy(j))
for j in range(self.sec):
self.G.Make_mesh(temp, self.tId2_faces, 'treads')
for k in temp:
k += Vector([depth + self.sp, 0, 0])
elif self.typ_t in ["tId3", "tId4", "tId5"]:
self.G.Make_mesh(coords,self.out_faces,'treads')
temp = []
for j in coords2:
temp.append(copy(j))
for j in range(self.sec):
self.G.Make_mesh(temp,self.G.faces,'bars')
for k in temp:
k += Vector([offset + self.sp, 0, 0])
for j in coords2:
j += Vector([self.d, 0, self.r])
temp = []
for j in coords3:
temp.append(copy(j))
for j in range(self.sn):
self.G.Make_mesh(temp,self.G.faces,'crosses')
for k in temp:
k += Vector([0, cW + cross, 0])
for j in coords3:
j += Vector([self.d, 0, self.r])
for j in coords:
j += Vector([self.d,0,self.r])
# Circular staircase:
elif self.typ in ["id4"]:
start = [Vector([0, -self.o, 0]), Vector([0, -self.o, -self.h]),
Vector([0, -self.w, 0]), Vector([0, -self.w, -self.h])]
self.d = radians(self.run) / self.n
for i in range(self.n):
coords = []
# Base faces. Should be able to append more sections:
tId4_faces = [[0, 1, 3, 2]]
t_inner = Matrix.Rotation((-self.t / self.o) + (self.d * i), 3, 'Z')
coords.append((t_inner * start[0]) + Vector([0, 0, self.r * i]))
coords.append((t_inner * start[1]) + Vector([0, 0, self.r * i]))
t_outer = Matrix.Rotation((-self.t / self.w) + (self.d * i), 3, 'Z')
coords.append((t_outer * start[2]) + Vector([0, 0, self.r * i]))
coords.append((t_outer * start[3]) + Vector([0, 0, self.r * i]))
k = 0
for j in range(self.deg + 1):
k = (j * 4) + 4
tId4_faces.append([k, k - 4, k - 3, k + 1])
tId4_faces.append([k - 2, k - 1, k + 3, k + 2])
tId4_faces.append([k + 1, k - 3, k - 1, k + 3])
tId4_faces.append([k, k - 4, k - 2, k + 2])
rot = Matrix.Rotation(((self.d * j) / self.deg) + (self.d * i), 3, 'Z')
for v in start:
coords.append((rot * v) + Vector([0, 0, self.r * i]))
tId4_faces.append([k, k + 1, k + 3, k + 2])
self.G.Make_mesh(coords, tId4_faces, 'treads')
return
|
Very friendly staff and very helpful indeed. The location is excellent.
Clean and comfortable; I would recommend this hotel any time.
New Hotel Alcántara offers comfortable and modern installations at the heart of Seville's historical center. It is located in the Santa Cruz district, next to the famous Murillo gardens, declared "of Cultural interest". The hotel could not be in a better location, just a few minutes' walk away from the Cathedral, the Alcázar Palace, the Plaza de España, the river and everything that constitutes a usual visit of Seville. During your stay in Hotel Alcántara, you will be able to enjoy strolling through the narrow streets of the famous Barrio de Santa Cruz, the old Jewish quarter, where we are situated. Just next to the Hotel is located the city center's shopping area, as well as a variety of bars and restaurants. You can leave your car in the public car park nearby. You access this 18th century aristocratic mansion through the old coach entrance and over what were the servant quarters and the kitchen garden. This entrance brings you to a modern, bright and welcoming building where you are sure to feel at home.
|
#!/usr/bin/env python
#-*- coding: utf8 -*-
from dialog import *
import database as db
import dbconf
import re
import sys
import os
if 'SUDO_USER' in os.environ:
user = os.environ['SUDO_USER']
else:
user = 'root'
userfromdb = db.select('users', where="login = '%s'" % user)
if len(userfromdb) == 0:
print 'Votre utilisateur n\'a pas été autorisé à avoir un site.'
print 'Merci de contacter l\'administrateur.'
sys.exit()
id_user = list(userfromdb)[0].id
if len(sys.argv) > 1:
default = sys.argv[1]
else:
default = ""
while True:
domain = text('Nom de domaine du site à éditer :', default)
if re.match(r'^([-a-zA-Z0-9_]+\.)+(fr|eu|cc|com|org|net|info|name|be)$', domain):
break
default = ""
sites = db.query("""SELECT websites.*, domains.name
FROM websites, domains
WHERE websites.id_domains = domains.id
AND domains.name = '%s'
AND websites.id_users = '%s'""" % (domain, id_user))
if len(sites) == 0:
print 'Aucun site portant ce domaine n\'existe sous votre nom'
sys.exit()
site = list(sites)[0]
site_id = site.id
try:
if site.enabled == "yes":
enabled = choices('Voulez-vous Éditer ou Désactiver le site ?', dict(e='edit', d='no'), default='e')
else:
enabled = choices('Voulez-vous Éditer ou Activer le site ?', dict(e='edit', a='yes'), default='e')
except KeyboardInterrupt:
print
sys.exit()
if enabled == "edit":
config = editor(filling=site.config.encode('utf8')).decode('utf8')
db.update('websites', where='id = $site_id', config=config, vars=locals())
print 'La configuration de %s a été mise à jour.' % domain
else:
db.update('websites', where='id = $site_id', enabled=enabled, vars=locals())
print 'Le site %s a été %s' % (domain, {'yes':'activé', 'no':'désactivé'}[enabled])
print 'N\'oubliez pas de relancer Lighttpd pour l\'appliquer'
print 'avec restart-lighttpd.'
|
I’m using Red Heart Gumdrop in the color Smoothie, which is a teal blue variegated color, on the Yes Yes Shawl. It’s very soft, and when I look at the shawl it reminds me of butterfly wings! The Yes Yes Shawl is turning out beautiful. Thanks Michael for the pattern and sharing this. It’s so lovely.
I have some of that same yarn and had no idea what to make with it…thank you so much…I had gone with the glam navy shimmer but that will be another one that I make.
Hello Mikey,many thanks to the useful videos.
I’m trying to know how I can participate? Did you release the video for this challenge and the pattern yet?
I am a newcomer to the Crochet Crowd, and have not done a challenge before. How do I get involved, get the instructions, etc? Unfortunately, I won’t be able to view the videos as I live Beyond the Void, in the WA Cascade Mts. and have a data limit with my satellite internet… But I want to try, even if I am late starting!
Yes…I have constructive feedback. DON’T CHANGE A THING! You are a wonderful teacher and a pleasure to learn from. The negativity you had to read was sad and unnecessary. This afghan is outside “many” of our comfort zones…but that’s the fun part! You said at the beginning it would be spectacular … and so it SHALL be. Thank you for all you do Mikey.
Me too, seven days a week. Unless the kids come home, I don’t even realize it’s the weekend.
However, I was wondering the same thing. I like the drape of the blue shawl pictured on the challenge page, and the yellow one in the pattern appears much smaller. You mentioned that the pattern is rather smallish. To make it the size of the blue, How would one go about enlarging it? The pattern begins at the point, so I just keep going, until I am satisfied? Or do I need to add some width at some point?
I love this one, and I can’t wait to dig in!
Running spell check might help.
if I want to make this a larger sized shawl, how much more yarn will I need? Your sample in light blue looks bigger than the pattern pic.
Hi Mikey, I am looking forward to doing this challenge and am wondering when the pattern will come out. It is supposed to come out today, isn’t it?
I see the video but where can I print the diagram and instructions?
I am very much looking forward to this challenge! Coincidently, both Paton’s Glam Stripes and Bernat Sheepish Stripes were on clearance at my local store this week, so of course, I had to buy them both. Maybe I’ll be doubly challenged.
I am also excited to start as this is another first for me, I have yarn waiting with hook in hand. Thank you Mikey and keep those great videos available, I love them.
I’m ready! I’ll be using Caron Simply Soft in yellow.
Counting down until we get started!
How complicated will the shawl be? I am not an experienced crocheter. I have never tried one before because they look too hard.
Greetings! I have some RH Unforgettable and I believe that this yarn stripes pretty gradually. Would this yarn be appropriate to use for this project? I know that this yarn is more like a light-worsted, but it is pretty comparable to the Caron SS Light. Any guidance would be greatly appreciated!
Thanks and I am looking forward to the next challenge!
I am looking forward to this shawl challenge. I have always wanted a shawl but never made one before. Thanks Mikey. I am glad there will be a video tutorial.
Tonight you shared a picture of a pillow that had the word LOVE on it. The V was a rainbow colored heart. It was like a graphghan. Wouldn’t it be fun to do that pattern as one of the challenges? For those of us that have never done a graphghan, it would be the perfect size to begin. You could do your video tutorial on it for those of us that are more visual learners. I think it would be a huge success. What do you think????
What about Red Heart Super Saver??? This is the only yarn that I buy.
Can’t wait for this Shawl Challenge to begin. I really want to make a shawl. Looking forward to this.
I have been wanting to do one of your challenges. I am always looking for new shawl patterns since I crochet for a prayer shawl ministry at my church. I already ordered my yarn. Can’t wait to get started.
I love being in the loop with all these beautiful challenges. I have made hats, scarves, and afghans for my family, but never anything for myself. I am doing the mystery challenge and I love it. It will be about the right size for me and my little teacup chi. Mikey, I love your tutorials as they are very clear. I do not usually do things out of my comfort zone as I am very OCD, but I’m trying to do better. Seeing all the beautiful things on the facebook page makes me want to try. Thank you for all you do for us crocheters.
Look forward to starting the shawl.
|
# -*- coding: utf-8 -*-
from django.db import models, migrations
from django.conf import settings
import hs_core.models
class Migration(migrations.Migration):
dependencies = [
('auth', '0001_initial'),
('pages', '__first__'),
migrations.swappable_dependency(settings.AUTH_USER_MODEL),
('contenttypes', '0001_initial'),
('hs_core', '0001_initial'),
]
operations = [
migrations.CreateModel(
name='BandInformation',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('object_id', models.PositiveIntegerField()),
('name', models.CharField(max_length=500, null=True)),
('variableName', models.TextField(max_length=100, null=True)),
('variableUnit', models.CharField(max_length=50, null=True)),
('method', models.TextField(null=True, blank=True)),
('comment', models.TextField(null=True, blank=True)),
('content_type', models.ForeignKey(related_name='hs_geo_raster_resource_bandinformation_related', to='contenttypes.ContentType')),
],
options={
'abstract': False,
},
bases=(models.Model,),
),
migrations.CreateModel(
name='CellInformation',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('object_id', models.PositiveIntegerField()),
('name', models.CharField(max_length=500, null=True)),
('rows', models.IntegerField(null=True)),
('columns', models.IntegerField(null=True)),
('cellSizeXValue', models.FloatField(null=True)),
('cellSizeYValue', models.FloatField(null=True)),
('cellSizeUnit', models.CharField(max_length=50, null=True)),
('cellDataType', models.CharField(max_length=50, null=True)),
('noDataValue', models.FloatField(null=True)),
('content_type', models.ForeignKey(related_name='hs_geo_raster_resource_cellinformation_related', to='contenttypes.ContentType')),
],
options={
},
bases=(models.Model,),
),
migrations.CreateModel(
name='OriginalCoverage',
fields=[
('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
('object_id', models.PositiveIntegerField()),
('_value', models.CharField(max_length=1024, null=True)),
('content_type', models.ForeignKey(related_name='hs_geo_raster_resource_originalcoverage_related', to='contenttypes.ContentType')),
],
options={
},
bases=(models.Model,),
),
migrations.CreateModel(
name='RasterMetaData',
fields=[
('coremetadata_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='hs_core.CoreMetaData')),
],
options={
},
bases=('hs_core.coremetadata',),
),
migrations.CreateModel(
name='RasterResource',
fields=[
('page_ptr', models.OneToOneField(parent_link=True, auto_created=True, primary_key=True, serialize=False, to='pages.Page')),
('comments_count', models.IntegerField(default=0, editable=False)),
('rating_count', models.IntegerField(default=0, editable=False)),
('rating_sum', models.IntegerField(default=0, editable=False)),
('rating_average', models.FloatField(default=0, editable=False)),
('public', models.BooleanField(default=True, help_text='If this is true, the resource is viewable and downloadable by anyone')),
('frozen', models.BooleanField(default=False, help_text='If this is true, the resource should not be modified')),
('do_not_distribute', models.BooleanField(default=False, help_text='If this is true, the resource owner has to designate viewers')),
('discoverable', models.BooleanField(default=True, help_text='If this is true, it will turn up in searches.')),
('published_and_frozen', models.BooleanField(default=False, help_text='Once this is true, no changes can be made to the resource')),
('content', models.TextField()),
('short_id', models.CharField(default=hs_core.models.short_id, max_length=32, db_index=True)),
('doi', models.CharField(help_text=b"Permanent identifier. Never changes once it's been set.", max_length=1024, null=True, db_index=True, blank=True)),
('object_id', models.PositiveIntegerField(null=True, blank=True)),
('content_type', models.ForeignKey(blank=True, to='contenttypes.ContentType', null=True)),
('creator', models.ForeignKey(related_name='creator_of_hs_geo_raster_resource_rasterresource', to=settings.AUTH_USER_MODEL, help_text='This is the person who first uploaded the resource')),
('edit_groups', models.ManyToManyField(help_text='This is the set of Hydroshare Groups who can edit the resource', related_name='group_editable_hs_geo_raster_resource_rasterresource', null=True, to='auth.Group', blank=True)),
('edit_users', models.ManyToManyField(help_text='This is the set of Hydroshare Users who can edit the resource', related_name='user_editable_hs_geo_raster_resource_rasterresource', null=True, to=settings.AUTH_USER_MODEL, blank=True)),
('last_changed_by', models.ForeignKey(related_name='last_changed_hs_geo_raster_resource_rasterresource', to=settings.AUTH_USER_MODEL, help_text='The person who last changed the resource', null=True)),
('owners', models.ManyToManyField(help_text='The person who has total ownership of the resource', related_name='owns_hs_geo_raster_resource_rasterresource', to=settings.AUTH_USER_MODEL)),
('user', models.ForeignKey(related_name='rasterresources', verbose_name='Author', to=settings.AUTH_USER_MODEL)),
('view_groups', models.ManyToManyField(help_text='This is the set of Hydroshare Groups who can view the resource', related_name='group_viewable_hs_geo_raster_resource_rasterresource', null=True, to='auth.Group', blank=True)),
('view_users', models.ManyToManyField(help_text='This is the set of Hydroshare Users who can view the resource', related_name='user_viewable_hs_geo_raster_resource_rasterresource', null=True, to=settings.AUTH_USER_MODEL, blank=True)),
],
options={
'ordering': ('_order',),
'verbose_name': 'Geographic Raster',
},
bases=('pages.page', models.Model),
),
migrations.AlterUniqueTogether(
name='originalcoverage',
unique_together=set([('content_type', 'object_id')]),
),
migrations.AlterUniqueTogether(
name='cellinformation',
unique_together=set([('content_type', 'object_id')]),
),
migrations.RemoveField(
model_name='cellinformation',
name='cellSizeUnit',
),
]
|
The pilot hydrogen storage and production facility that EPFL has built in Martigny (VS) had a public open house yesterday. It was an opportunity to get a glimpse into the fueling stations of the future.
In just a few years, this request won’t be a fictional one. The clean vehicles we occasionally see on the roads will increasingly become the norm.
Whether they are electric with lithium batteries or a fuel cell, or burn hydrogen instead of petroleum, these vehicles will still need, like today’s cars, to be regularly refueled. “Electric cars still need lengthy periods of time to recharge – several hours on the 230-volt grid”, says Hubert Girault, director of EPFL’s Laboratory of Physical and Analytical Electrochemistry.
One of the avenues being explored in his lab is found at the pilot facility in Martigny (Valais). It is a device that stores electricity in a mega-flow battery and then releases it as direct current. The mega-flow battery acts as a buffer between the electricity being produced (e.g. by wind) and its rapid transfer to a vehicle, which could be charged up in a short time. “These mega-batteries are capable of delivering 500 volts at 300 amperes, like Tesla’s Supercharger stations,” explains the professor.
The pilot facility in Martigny is based on a vanadium redox-flow battery. Unlike traditional lead or lithium batteries, in which the charge accumulates in electrodes, in this battery the charge accumulates in liquid electrolytes. The power is proportional to the electrodes’ surface area and the accumulated energy is proportional to the reservoir volume. The technology is intrinsically extremely safe, with no risk of explosion.
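The scaling described above can be sketched numerically. All figures below except the 500 V / 300 A quote from the article are illustrative assumptions (tank volume, electrolyte energy density), not specifications of the Martigny facility:

```python
# Back-of-the-envelope sketch of redox-flow battery scaling.
# Only the 500 V / 300 A figure comes from the article; the tank
# volume and energy density are illustrative assumptions.

def stack_power_kw(voltage_v, current_a):
    """Electrical power delivered by the stack, in kW (P = V * I)."""
    return voltage_v * current_a / 1000.0

def stored_energy_kwh(tank_volume_l, energy_density_wh_per_l):
    """Energy scales with electrolyte volume: E = density * volume."""
    return tank_volume_l * energy_density_wh_per_l / 1000.0

# The article quotes 500 V at 300 A:
power = stack_power_kw(500, 300)        # 150.0 kW
# Assume roughly 25 Wh/L for vanadium electrolyte (a commonly
# cited literature value) and a hypothetical 10,000 L tank:
energy = stored_energy_kwh(10000, 25)   # 250.0 kWh
print(power, energy)
```

The point of the sketch is the independence of the two terms: doubling the electrode area doubles power, while doubling the tank volume doubles energy, so the two can be sized separately.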
These mega-flow batteries, some of which can produce hundreds of megawatts, do have one drawback: what do you do when the battery is full, but the wind-turbine providing the electricity is still running?
EPFL has an answer: store it as hydrogen. This energy-rich gas can be burned in a combustion engine or used in a fuel cell to produce electricity. The pilot facility combines the two stages of the process, foreshadowing the fueling stations of the future, which will be able to provide both direct current and hydrogen.
EPFL deliberately chose to build the mega-flow battery in Martigny’s water treatment station, in partnership with the District of Martigny, CREM (Centre de recherches énergétiques et municipales) and the public works (Sinergy). In addition to sourcing energy from a wind-energy project in the region, the facility will also use the hydrogen it produces to completely turn the biogas generated when treating waste water into methane. Biogas is a mixture of methane and CO2 produced by the anaerobic digestion of organic matter; once it is "methanized," it can be used to fuel vehicles that run on natural gas.
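The methanation step mentioned above upgrades biogas by reacting its CO2 fraction with hydrogen, as in the Sabatier reaction: CO2 + 4 H2 → CH4 + 2 H2O. A minimal stoichiometric sketch (the reaction is standard chemistry, but the article gives no operating parameters for the Martigny process, so the numbers here are purely illustrative):

```python
# Stoichiometry of the Sabatier reaction: CO2 + 4 H2 -> CH4 + 2 H2O.
# Illustrative only; not the operating parameters of the actual plant.

def hydrogen_needed_mol(co2_mol):
    """Four moles of H2 are consumed per mole of CO2 methanized."""
    return 4 * co2_mol

def methane_produced_mol(co2_mol):
    """Each mole of CO2 converted yields one mole of CH4."""
    return co2_mol

# For every 10 mol of CO2 in the biogas:
print(hydrogen_needed_mol(10))   # 40
print(methane_produced_mol(10))  # 10
```

This is why coupling the battery to an electrolyzer is attractive: surplus renewable electricity becomes hydrogen, and the hydrogen both stores energy and upgrades the CO2 fraction of the biogas into additional methane.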
“In the long term, we believe that this site will be very advantageous as a platform for testing a variety of technologies that will be necessary in the transition towards renewable sources of energy,” says Girault.
|
import operator
import os
import shutil
import tempfile
import subprocess
from twisted.internet.defer import fail
from landscape.lib.testing import ProcessDataBuilder
from landscape.client.monitor.activeprocessinfo import ActiveProcessInfo
from landscape.client.tests.helpers import LandscapeTest, MonitorHelper
from mock import ANY, Mock, patch
class ActiveProcessInfoTest(LandscapeTest):
"""Active process info plugin tests."""
helpers = [MonitorHelper]
def setUp(self):
"""Initialize helpers and sample data builder."""
LandscapeTest.setUp(self)
self.sample_dir = tempfile.mkdtemp()
self.builder = ProcessDataBuilder(self.sample_dir)
self.mstore.set_accepted_types(["active-process-info"])
def tearDown(self):
"""Clean up sample data artifacts."""
shutil.rmtree(self.sample_dir)
LandscapeTest.tearDown(self)
def test_first_run_includes_kill_message(self):
"""Test ensures that the first run queues a kill-processes message."""
plugin = ActiveProcessInfo(uptime=10)
self.monitor.add(plugin)
plugin.exchange()
message = self.mstore.get_pending_messages()[0]
self.assertEqual(message["type"], "active-process-info")
self.assertTrue("kill-all-processes" in message)
self.assertEqual(message["kill-all-processes"], True)
self.assertTrue("add-processes" in message)
def test_only_first_run_includes_kill_message(self):
"""Test ensures that only the first run queues a kill message."""
self.builder.create_data(672, self.builder.TRACING_STOP,
uid=1000, gid=1000, started_after_boot=10,
process_name="blarpy")
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=10)
self.monitor.add(plugin)
self.monitor.exchange()
self.builder.create_data(671, self.builder.STOPPED, uid=1000,
gid=1000, started_after_boot=15,
process_name="blargh")
self.monitor.exchange()
messages = self.mstore.get_pending_messages()
self.assertEqual(len(messages), 2)
message = messages[0]
self.assertEqual(message["type"], "active-process-info")
self.assertTrue("kill-all-processes" in message)
self.assertTrue("add-processes" in message)
message = messages[1]
self.assertEqual(message["type"], "active-process-info")
self.assertTrue("add-processes" in message)
def test_terminating_process_race(self):
"""Test that the plugin handles process termination races.
There is a potential race in the time between getting a list
of process directories in C{/proc} and reading
C{/proc/<process-id>/status} or C{/proc/<process-id>/stat}.
The process with C{<process-id>} may terminate in this
window, causing the status (or stat) file to be removed and
resulting in a file-not-found IOError.
This test simulates race behaviour by creating a directory for
a process without a C{status} or C{stat} file.
"""
directory = tempfile.mkdtemp()
try:
os.mkdir(os.path.join(directory, "42"))
plugin = ActiveProcessInfo(proc_dir=directory, uptime=10)
self.monitor.add(plugin)
plugin.exchange()
finally:
shutil.rmtree(directory)
def test_read_proc(self):
"""Test reading from /proc."""
plugin = ActiveProcessInfo(uptime=10)
self.monitor.add(plugin)
plugin.exchange()
messages = self.mstore.get_pending_messages()
self.assertTrue(len(messages) > 0)
self.assertTrue("add-processes" in messages[0])
def test_read_sample_data(self):
"""Test reading a sample set of process data."""
self.builder.create_data(1, self.builder.RUNNING, uid=0, gid=0,
started_after_boot=1030, process_name="init")
self.builder.create_data(671, self.builder.STOPPED, uid=1000,
gid=1000, started_after_boot=1110,
process_name="blargh")
self.builder.create_data(672, self.builder.TRACING_STOP,
uid=1000, gid=1000, started_after_boot=1120,
process_name="blarpy")
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=100,
jiffies=10, boot_time=0)
self.monitor.add(plugin)
plugin.exchange()
message = self.mstore.get_pending_messages()[0]
self.assertEqual(message["type"], "active-process-info")
self.assertTrue("kill-all-processes" in message)
self.assertTrue("add-processes" in message)
expected_process_0 = {"state": b"R", "gid": 0, "pid": 1,
"vm-size": 11676, "name": "init", "uid": 0,
"start-time": 103, "percent-cpu": 0.0}
expected_process_1 = {"state": b"T", "gid": 1000, "pid": 671,
"vm-size": 11676, "name": "blargh", "uid": 1000,
"start-time": 111, "percent-cpu": 0.0}
expected_process_2 = {"state": b"t", "gid": 1000, "pid": 672,
"vm-size": 11676, "name": "blarpy", "uid": 1000,
"start-time": 112, "percent-cpu": 0.0}
processes = message["add-processes"]
processes.sort(key=operator.itemgetter("pid"))
self.assertEqual(processes, [expected_process_0, expected_process_1,
expected_process_2])
def test_skip_non_numeric_subdirs(self):
"""Test ensures the plugin doesn't touch non-process dirs in /proc."""
self.builder.create_data(1, self.builder.RUNNING, uid=0, gid=0,
started_after_boot=1120, process_name="init")
directory = os.path.join(self.sample_dir, "acpi")
os.mkdir(directory)
self.assertTrue(os.path.isdir(directory))
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=100,
jiffies=10, boot_time=0)
self.monitor.add(plugin)
plugin.exchange()
message = self.mstore.get_pending_messages()[0]
self.assertEqual(message["type"], "active-process-info")
self.assertTrue("kill-all-processes" in message)
self.assertTrue("add-processes" in message)
expected_process = {"pid": 1, "state": b"R", "name": "init",
"vm-size": 11676, "uid": 0, "gid": 0,
"start-time": 112, "percent-cpu": 0.0}
self.assertEqual(message["add-processes"], [expected_process])
def test_plugin_manager(self):
"""Test plugin manager integration."""
self.builder.create_data(1, self.builder.RUNNING, uid=0, gid=0,
started_after_boot=1100, process_name="init")
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=100,
jiffies=10, boot_time=0)
self.monitor.add(plugin)
self.monitor.exchange()
self.assertMessages(
self.mstore.get_pending_messages(),
[{"type": "active-process-info",
"kill-all-processes": True,
"add-processes": [{"pid": 1, "state": b"R", "name": "init",
"vm-size": 11676, "uid": 0, "gid": 0,
"start-time": 110, "percent-cpu": 0.0}]}])
def test_process_terminated(self):
"""Test that the plugin handles process changes in a diff-like way."""
# This test is *too big*
self.builder.create_data(1, self.builder.RUNNING, uid=0, gid=0,
started_after_boot=1010, process_name="init")
self.builder.create_data(671, self.builder.STOPPED, uid=1000,
gid=1000, started_after_boot=1020,
process_name="blargh")
self.builder.create_data(672, self.builder.TRACING_STOP,
uid=1000, gid=1000, started_after_boot=1040,
process_name="blarpy")
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=100,
jiffies=10, boot_time=0)
self.monitor.add(plugin)
plugin.exchange()
# Terminate a process and start another.
self.builder.remove_data(671)
self.builder.create_data(12753, self.builder.RUNNING,
uid=0, gid=0, started_after_boot=1070,
process_name="wubble")
plugin.exchange()
messages = self.mstore.get_pending_messages()
self.assertEqual(len(messages), 2)
# The first time the plugin runs we expect all known processes
# to be killed.
message = messages[0]
self.assertEqual(message["type"], "active-process-info")
self.assertTrue("kill-all-processes" in message)
self.assertEqual(message["kill-all-processes"], True)
self.assertTrue("add-processes" in message)
expected_process_0 = {"state": b"R", "gid": 0, "pid": 1,
"vm-size": 11676, "name": "init",
"uid": 0, "start-time": 101,
"percent-cpu": 0.0}
expected_process_1 = {"state": b"T", "gid": 1000, "pid": 671,
"vm-size": 11676, "name": "blargh",
"uid": 1000, "start-time": 102,
"percent-cpu": 0.0}
expected_process_2 = {"state": b"t", "gid": 1000, "pid": 672,
"vm-size": 11676, "name": "blarpy",
"uid": 1000, "start-time": 104,
"percent-cpu": 0.0}
processes = message["add-processes"]
processes.sort(key=operator.itemgetter("pid"))
self.assertEqual(processes, [expected_process_0, expected_process_1,
expected_process_2])
# Report diff-like changes to processes, such as terminated
# processes and new processes.
message = messages[1]
self.assertEqual(message["type"], "active-process-info")
self.assertTrue("add-processes" in message)
self.assertEqual(len(message["add-processes"]), 1)
expected_process = {"state": b"R", "gid": 0, "pid": 12753,
"vm-size": 11676, "name": "wubble",
"uid": 0, "start-time": 107,
"percent-cpu": 0.0}
self.assertEqual(message["add-processes"], [expected_process])
self.assertTrue("kill-processes" in message)
self.assertEqual(len(message["kill-processes"]), 1)
self.assertEqual(message["kill-processes"], [671])
def test_only_queue_message_when_process_data_is_available(self):
"""Test ensures that messages are only queued when data changes."""
self.builder.create_data(672, self.builder.TRACING_STOP,
uid=1000, gid=1000, started_after_boot=10,
process_name="blarpy")
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=10)
self.monitor.add(plugin)
plugin.exchange()
self.assertEqual(len(self.mstore.get_pending_messages()), 1)
plugin.exchange()
self.assertEqual(len(self.mstore.get_pending_messages()), 1)
def test_only_report_active_processes(self):
"""Test ensures the plugin only reports active processes."""
self.builder.create_data(672, self.builder.DEAD,
uid=1000, gid=1000, started_after_boot=10,
process_name="blarpy")
self.builder.create_data(673, self.builder.ZOMBIE,
uid=1000, gid=1000, started_after_boot=12,
process_name="blarpitty")
self.builder.create_data(674, self.builder.RUNNING,
uid=1000, gid=1000, started_after_boot=13,
process_name="blarpie")
self.builder.create_data(675, self.builder.STOPPED,
uid=1000, gid=1000, started_after_boot=14,
process_name="blarping")
self.builder.create_data(676, self.builder.TRACING_STOP,
uid=1000, gid=1000, started_after_boot=15,
process_name="floerp")
self.builder.create_data(677, self.builder.DISK_SLEEP,
uid=1000, gid=1000, started_after_boot=18,
process_name="floerpidity")
self.builder.create_data(678, self.builder.SLEEPING,
uid=1000, gid=1000, started_after_boot=21,
process_name="floerpiditting")
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=10)
self.monitor.add(plugin)
plugin.exchange()
messages = self.mstore.get_pending_messages()
self.assertEqual(len(messages), 1)
message = messages[0]
self.assertTrue("kill-all-processes" in message)
self.assertTrue("kill-processes" not in message)
self.assertTrue("add-processes" in message)
pids = [process["pid"] for process in message["add-processes"]]
pids.sort()
self.assertEqual(pids, [673, 674, 675, 676, 677, 678])
def test_report_interesting_state_changes(self):
"""Test ensures that interesting state changes are reported."""
self.builder.create_data(672, self.builder.RUNNING,
uid=1000, gid=1000, started_after_boot=10,
process_name="blarpy")
# Report a running process.
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=10)
self.monitor.add(plugin)
plugin.exchange()
messages = self.mstore.get_pending_messages()
self.assertEqual(len(messages), 1)
message = messages[0]
self.assertTrue("kill-all-processes" in message)
self.assertTrue("kill-processes" not in message)
self.assertTrue("add-processes" in message)
self.assertEqual(message["add-processes"][0]["pid"], 672)
self.assertEqual(message["add-processes"][0]["state"], b"R")
# Convert the process to a zombie and ensure it gets reported.
self.builder.remove_data(672)
self.builder.create_data(672, self.builder.ZOMBIE,
uid=1000, gid=1000, started_after_boot=10,
process_name="blarpy")
plugin.exchange()
messages = self.mstore.get_pending_messages()
self.assertEqual(len(messages), 2)
message = messages[1]
self.assertTrue("kill-all-processes" not in message)
self.assertTrue("update-processes" in message)
self.assertEqual(message["update-processes"][0]["state"], b"Z")
def test_call_on_accepted(self):
"""
L{MonitorPlugin}-based plugins can provide a callable to call
when a message type becomes accepted.
"""
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=100,
jiffies=10)
self.monitor.add(plugin)
self.assertEqual(len(self.mstore.get_pending_messages()), 0)
result = self.monitor.fire_event(
"message-type-acceptance-changed", "active-process-info", True)
def assert_messages(ignored):
self.assertEqual(len(self.mstore.get_pending_messages()), 1)
result.addCallback(assert_messages)
return result
def test_resynchronize_event(self):
"""
When a C{resynchronize} event occurs, with 'process' scope, we should
clear the information held in memory by the activeprocess monitor.
"""
self.builder.create_data(1, self.builder.RUNNING, uid=0, gid=0,
started_after_boot=1030, process_name="init")
self.builder.create_data(671, self.builder.STOPPED, uid=1000,
gid=1000, started_after_boot=1110,
process_name="blargh")
self.builder.create_data(672, self.builder.TRACING_STOP,
uid=1000, gid=1000, started_after_boot=1120,
process_name="blarpy")
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=100,
jiffies=10, boot_time=0)
self.monitor.add(plugin)
plugin.exchange()
messages = self.mstore.get_pending_messages()
expected_messages = [{"add-processes": [
{"gid": 1000,
"name": u"blarpy",
"pid": 672,
"start-time": 112,
"state": b"t",
"uid": 1000,
"vm-size": 11676,
"percent-cpu": 0.0},
{"gid": 0,
"name": u"init",
"pid": 1,
"start-time": 103,
"state": b"R",
"uid": 0,
"vm-size": 11676,
"percent-cpu": 0.0},
{"gid": 1000,
"name": u"blargh",
"pid": 671,
"start-time": 111,
"state": b"T",
"uid": 1000,
"vm-size": 11676,
"percent-cpu": 0.0}],
"kill-all-processes": True,
"type": "active-process-info"}]
self.assertMessages(messages, expected_messages)
plugin.exchange()
messages = self.mstore.get_pending_messages()
# No new messages should be pending
self.assertMessages(messages, expected_messages)
process_scope = ["process"]
self.reactor.fire("resynchronize", process_scope)
plugin.exchange()
messages = self.mstore.get_pending_messages()
# The resynchronisation should cause the same messages to be generated
# again.
expected_messages.extend(expected_messages)
self.assertMessages(messages, expected_messages)
def test_resynchronize_event_resets_session_id(self):
"""
When a C{resynchronize} event occurs a new session id is acquired so
that future messages can be sent.
"""
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=100,
jiffies=10, boot_time=0)
self.monitor.add(plugin)
session_id = plugin._session_id
plugin.client.broker.message_store.drop_session_ids()
self.reactor.fire("resynchronize")
plugin.exchange()
self.assertNotEqual(session_id, plugin._session_id)
def test_resynchronize_event_with_global_scope(self):
"""
When a C{resynchronize} event occurs the L{_reset} method should be
called on L{ActiveProcessInfo}.
"""
self.builder.create_data(672, self.builder.TRACING_STOP,
uid=1000, gid=1000, started_after_boot=1120,
process_name="blarpy")
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=100,
jiffies=10, boot_time=0)
self.monitor.add(plugin)
plugin.exchange()
messages = self.mstore.get_pending_messages()
expected_messages = [{"add-processes": [
{"gid": 1000,
"name": u"blarpy",
"pid": 672,
"start-time": 112,
"state": b"t",
"uid": 1000,
"vm-size": 11676,
"percent-cpu": 0.0}],
"kill-all-processes": True,
"type": "active-process-info"}]
self.assertMessages(messages, expected_messages)
plugin.exchange()
messages = self.mstore.get_pending_messages()
# No new messages should be pending
self.assertMessages(messages, expected_messages)
self.reactor.fire("resynchronize")
plugin.exchange()
messages = self.mstore.get_pending_messages()
# The resynchronisation should cause the same messages to be generated
# again.
expected_messages.extend(expected_messages)
self.assertMessages(messages, expected_messages)
def test_do_not_resynchronize_with_other_scope(self):
"""
When a C{resynchronize} event occurs, with an irrelevant scope, we
should do nothing.
"""
self.builder.create_data(672, self.builder.TRACING_STOP,
uid=1000, gid=1000, started_after_boot=1120,
process_name="blarpy")
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=100,
jiffies=10, boot_time=0)
self.monitor.add(plugin)
plugin.exchange()
messages = self.mstore.get_pending_messages()
expected_messages = [{"add-processes": [
{"gid": 1000,
"name": u"blarpy",
"pid": 672,
"start-time": 112,
"state": b"t",
"uid": 1000,
"vm-size": 11676,
"percent-cpu": 0.0}],
"kill-all-processes": True,
"type": "active-process-info"}]
self.assertMessages(messages, expected_messages)
plugin.exchange()
messages = self.mstore.get_pending_messages()
# No new messages should be pending
self.assertMessages(messages, expected_messages)
disk_scope = ["disk"]
self.reactor.fire("resynchronize", disk_scope)
plugin.exchange()
messages = self.mstore.get_pending_messages()
# The resynchronisation should not have fired, so we won't see any
# additional messages here.
self.assertMessages(messages, expected_messages)
def test_do_not_persist_changes_when_send_message_fails(self):
"""
When the plugin is run it persists data that it uses on
subsequent checks to calculate the delta to send. It should
only persist data when the broker confirms that the message
sent by the plugin has been sent.
"""
class MyException(Exception):
pass
self.log_helper.ignore_errors(MyException)
self.builder.create_data(672, self.builder.RUNNING,
uid=1000, gid=1000, started_after_boot=10,
process_name="python")
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=10)
self.monitor.add(plugin)
self.monitor.broker.send_message = Mock(
return_value=fail(MyException()))
message = plugin.get_message()
def assert_message(message_id):
self.assertEqual(message, plugin.get_message())
result = plugin.exchange()
result.addCallback(assert_message)
self.monitor.broker.send_message.assert_called_once_with(
ANY, ANY, urgent=ANY)
return result
def test_process_updates(self):
"""Test updates to processes are successfully reported."""
self.builder.create_data(1, self.builder.RUNNING, uid=0, gid=0,
started_after_boot=1100, process_name="init",)
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=100,
jiffies=10, boot_time=0)
self.monitor.add(plugin)
with patch.object(plugin.registry, 'flush') as flush_mock:
plugin.exchange()
flush_mock.assert_called_once_with()
flush_mock.reset_mock()
messages = self.mstore.get_pending_messages()
self.assertEqual(len(messages), 1)
self.builder.remove_data(1)
self.builder.create_data(1, self.builder.RUNNING, uid=0, gid=0,
started_after_boot=1100,
process_name="init", vmsize=20000)
plugin.exchange()
flush_mock.assert_called_once_with()
messages = self.mstore.get_pending_messages()
self.assertEqual(len(messages), 2)
self.assertMessages(messages, [{"timestamp": 0,
"api": b"3.2",
"type": "active-process-info",
"kill-all-processes": True,
"add-processes": [{"start-time": 110,
"name": u"init",
"pid": 1,
"percent-cpu": 0.0,
"state": b"R",
"gid": 0,
"vm-size": 11676,
"uid": 0}]},
{"timestamp": 0,
"api": b"3.2",
"type": "active-process-info",
"update-processes": [
{"start-time": 110,
"name": u"init",
"pid": 1,
"percent-cpu": 0.0,
"state": b"R",
"gid": 0,
"vm-size": 20000,
"uid": 0}]}])
class PluginManagerIntegrationTest(LandscapeTest):
helpers = [MonitorHelper]
def setUp(self):
LandscapeTest.setUp(self)
self.sample_dir = self.makeDir()
self.builder = ProcessDataBuilder(self.sample_dir)
self.mstore.set_accepted_types(["active-process-info",
"operation-result"])
def get_missing_pid(self):
popen = subprocess.Popen(["hostname"], stdout=subprocess.PIPE)
popen.wait()
return popen.pid
def get_active_process(self):
return subprocess.Popen(["python", "-c", "raw_input()"],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
def test_read_long_process_name(self):
"""Test reading a process with a long name."""
self.builder.create_data(1, self.builder.RUNNING, uid=0, gid=0,
started_after_boot=1030,
process_name="NetworkManagerDaemon")
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=2000,
jiffies=10, boot_time=0)
self.monitor.add(plugin)
plugin.exchange()
message = self.mstore.get_pending_messages()[0]
self.assertEqual(message["type"], "active-process-info")
self.assertTrue("kill-all-processes" in message)
self.assertTrue("add-processes" in message)
expected_process_0 = {"state": b"R", "gid": 0, "pid": 1,
"vm-size": 11676, "name": "NetworkManagerDaemon",
"uid": 0, "start-time": 103, "percent-cpu": 0.0}
processes = message["add-processes"]
self.assertEqual(processes, [expected_process_0])
def test_strip_command_line_name_whitespace(self):
"""Whitespace should be stripped from command-line names."""
self.builder.create_data(1, self.builder.RUNNING, uid=0, gid=0,
started_after_boot=30,
process_name=" postgres: writer process ")
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=100,
jiffies=10)
self.monitor.add(plugin)
plugin.exchange()
message = self.mstore.get_pending_messages()[0]
self.assertEqual(message["add-processes"][0]["name"],
u"postgres: writer process")
def test_read_process_with_no_cmdline(self):
"""Test reading a process without a cmdline file."""
self.builder.create_data(1, self.builder.RUNNING, uid=0, gid=0,
started_after_boot=1030,
process_name="ProcessWithLongName",
generate_cmd_line=False)
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=100,
jiffies=10, boot_time=0)
self.monitor.add(plugin)
plugin.exchange()
message = self.mstore.get_pending_messages()[0]
self.assertEqual(message["type"], "active-process-info")
self.assertTrue("kill-all-processes" in message)
self.assertTrue("add-processes" in message)
expected_process_0 = {"state": b"R", "gid": 0, "pid": 1,
"vm-size": 11676, "name": "ProcessWithLong",
"uid": 0, "start-time": 103, "percent-cpu": 0.0}
processes = message["add-processes"]
self.assertEqual(processes, [expected_process_0])
def test_generate_cpu_usage(self):
"""
Test that we can calculate the CPU usage from system information and
the /proc/<pid>/stat file.
"""
stat_data = "1 Process S 1 0 0 0 0 0 0 0 " \
"0 0 20 20 0 0 0 0 0 0 3000 0 " \
"0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0"
self.builder.create_data(1, self.builder.RUNNING, uid=0, gid=0,
started_after_boot=None,
process_name="Process",
generate_cmd_line=False,
stat_data=stat_data)
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=400,
jiffies=10, boot_time=0)
self.monitor.add(plugin)
plugin.exchange()
message = self.mstore.get_pending_messages()[0]
self.assertEqual(message["type"], "active-process-info")
self.assertTrue("kill-all-processes" in message)
self.assertTrue("add-processes" in message)
processes = message["add-processes"]
expected_process_0 = {"state": b"R", "gid": 0, "pid": 1,
"vm-size": 11676, "name": u"Process",
"uid": 0, "start-time": 300,
"percent-cpu": 4.00}
processes = message["add-processes"]
self.assertEqual(processes, [expected_process_0])
def test_generate_cpu_usage_capped(self):
"""
Test that we can calculate the CPU usage from system information and
the /proc/<pid>/stat file, the CPU usage should be capped at 99%.
"""
stat_data = "1 Process S 1 0 0 0 0 0 0 0 " \
"0 0 500 500 0 0 0 0 0 0 3000 0 " \
"0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0"
self.builder.create_data(1, self.builder.RUNNING, uid=0, gid=0,
started_after_boot=None,
process_name="Process",
generate_cmd_line=False,
stat_data=stat_data)
plugin = ActiveProcessInfo(proc_dir=self.sample_dir, uptime=400,
jiffies=10, boot_time=0)
self.monitor.add(plugin)
plugin.exchange()
message = self.mstore.get_pending_messages()[0]
self.assertEqual(message["type"], "active-process-info")
self.assertTrue("kill-all-processes" in message)
self.assertTrue("add-processes" in message)
processes = message["add-processes"]
expected_process_0 = {"state": b"R", "gid": 0, "pid": 1,
"vm-size": 11676, "name": u"Process",
"uid": 0, "start-time": 300,
"percent-cpu": 99.00}
processes = message["add-processes"]
self.assertEqual(processes, [expected_process_0])
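The expected percent-cpu values in both tests follow directly from /proc/&lt;pid&gt;/stat arithmetic: utime and stime (fields 14 and 15, in jiffies) give the CPU time consumed, starttime (field 22) gives when the process started, and the plugin's uptime and jiffies parameters give elapsed wall time. A minimal sketch of that computation (field meanings assumed from proc(5); the real plugin code may differ in detail):

```python
def percent_cpu(utime, stime, starttime, uptime, jiffies_per_sec):
    """CPU usage of one process since it started, capped at 99%."""
    cpu_seconds = (utime + stime) / jiffies_per_sec
    running_seconds = uptime - (starttime / jiffies_per_sec)
    if running_seconds <= 0:
        return 0.0
    return min(99.0, round(100.0 * cpu_seconds / running_seconds, 2))

# First test: 20+20 jiffies over (400 - 3000/10) = 100 s of wall time -> 4.0
assert percent_cpu(20, 20, 3000, 400, 10) == 4.0
# Capped test: 500+500 jiffies over the same 100 s -> 100 %, capped at 99
assert percent_cpu(500, 500, 3000, 400, 10) == 99.0
```

The cap at 99% matches the second test's expectation; it keeps rounding artifacts from reporting an impossible 100%+ for a single process.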
|
OK, I am trying to do 30 tiny paintings in 30 days.
Here is the first, “Tooth & Nail.” oil on canvas, 5″ x 5″.
This entry was posted in art, contemporary art, fine art, new art, painting and tagged Hussalonia, Never Be Famous, Tooth & Nail. Bookmark the permalink.
Great idea! I’m going to do the same. Thanks for the inspiration.
Cool, Bloomers! Hope you’ll post them on your blog as you go- I look forward to seeing them.
|
# These imports move Python 2.x almost to Python 3.
# They must precede anything except #comments, including even the docstring
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from future_builtins import *
__version__ = "1.3.0"
__author__ = "David Cortesi"
__copyright__ = "Copyright 2011, 2012, 2013 David Cortesi"
__maintainer__ = "David Cortesi"
__email__ = "tallforasmurf@yahoo.com"
__license__ = '''
License (GPL-3.0) :
This file is part of PPQT.
PPQT is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You can find a copy of the GNU General Public License in the file
extras/COPYING.TXT included in the distribution of this program, or see:
<http://www.gnu.org/licenses/>.
'''
'''
Implement the Footnote management panel, whose chief feature is a table
of footnotes that is re/built with a Refresh button. Important nomenclature:
A footnote KEY is a symbol that links an ANCHOR to a NOTE.
An Anchor
* appears in text but never in column 0 (never at start of line),
* never appears inside a word, so [oe] is not an anchor,
* has a Key in square brackets with no superfluous spaces, e.g. [A] or [2].
A Note
* begins on a line that follows its matching anchor
* always begins in column 0 with [Footnote k: where k is a Key.
* always ends with a right square bracket at end of line.
It is not required that Keys be unique. (It is normal for most Keys in a PG
text to be proofed as "A" and a few as "B".) However it is expected and required
that (a) the Anchor with Key k precedes the Note with the matching Key k,
and (b) Notes with the same Key appear in the same sequence as their anchors.
A Note may contain an Anchor, but Notes may NOT be nested. A Note anchored in
another Note must be outside the other note. A note may contain square brackets
so long as the contained brackets do not end a line. This is valid:
Text[A] and more text[A]
...
[Footnote A: this note has[i] an anchor.]
[Footnote A: this is the second note A and runs
to multiple -- [text in brackets but not at end of line] --
lines]
[Footnote i: inner note anchored in first note A.]
The footnote table has these columns:
Key: The key text from a footnote, e.g. A or iv or 92.
Class: The class of the key, one of:
ABC uppercase alpha
IVX uppercase roman numeral
123 decimal
abc lowercase alpha
ivx lowercase roman numeral
*\u00A4\u00A7 symbols
Ref Line: The text block (line) number containing the anchor
Note Line: The text block number of the matching Note
Length: The length in lines of the matched Note
Text: The opening text of the Note e.g. [Footnote A: This note has...
The example above might produce the following table:
Key Class Ref Line Note Line Length Text
A ABC 1535 1570 1 Footnote A: this note has[i..
A ABC 1535 1571 3 Footnote A: this is the sec..
i ivx 1570 1574 1 Footnote i: inner note refe..
The table interacts as follows.
* Clicking Key jumps the edit text to the Ref line, unless it is on the ref
line in which case it jumps to the Note line, in other words, click/click
the Key to ping-pong between the Ref and the Note.
* Clicking Ref Line jumps the edit text to that line with the Key
(not the whole Anchor) selected.
* Clicking Note line jumps the edit text to that line with the Note selected.
* When a Key or a Note is not matched, its row is pink.
* When the Lines value is >10, or Note Line minus Ref Line is >50, the row
is pale green
The actual data behind the table is a Python list of dicts where each dict
describes one Key and/or Note (both, when they match), with these elements:
'K' : Key symbol as a QString
'C' : Key class number
'R' : QTextCursor with position/anchor selecting the Key in the Ref, or None
'N' : QTextCursor selecting the Note, or None
If an Anchor is found, K has the Key and R selects the Key.
If a Note is found, K has the key and N selects the Note.
When a Ref and a Note are matched, all fields are set.
Note we don't pull out the line numbers but rather get them as needed from the
QTextCursors. This is because Qt keeps the cursors updated as the document
is edited, so edits that don't modify Refs or Notes don't need Refresh to keep
the table current.
When Refresh is clicked, this list of dicts is rebuilt by scanning the whole
document with regexs to find Anchors and Notes, and matching them.
During Refresh, found Keys are assigned to a number class based on their
values, with classes expressed as regular expressions:
Regex Assumed class
[IVXLCDM]{1,19} IVX
[A-Z]{1,3} ABC
[1-9]{1,3} 123
[ivxlcdm]{1,19} ivx
[a-z]{1,3} abc
[*\u00a4\u00a7\u00b6\u2020\u2021] symbols: star, currency, section, para, dagger, dbl-dagger
(Apart from the symbols these tests are not unicode-aware, e.g. the ABC class
does not encompass uppercase Cyrillic, only the Latin-1 letters. In Qt5 it may
be possible to code a regex to detect Unicode upper- or lowercase, and we can
revisit allowing e.g. Keys with Greek letters.)
Other controls supplied at the bottom of the panel are:
Renumber Streams: a box with the six Key classes and for each, a popup
giving the choice of renumber stream:
1,2,..9999
A,B,..ZZZ
I,II,..MMMM
a,b,..zzz
i,ii,..mmmm
no renumber
There are five unique number streams, set to 0 at the start of a renumber
operation and incremented before use, and formatted in one of five ways.
The initial assignment of classes to streams is:
123 : 1,2,..9999
ABC : A,B,..ZZZ
IVX : A,B,..ZZZ
abc : a,b,..zzz
ivx : a,b,..zzz
sym : no renumber
A typical book has only ABC keys, or possibly ABC and also ixv or 123 Keys.
There is unavoidable ambiguity between alpha and roman classes. Although an
alpha key with only roman letters is classed as roman, the renumber stream
for roman is initialized to the alpha number stream.
In other words, the ambiguity is resolved in favor of treating all alphas
as alphas. If the user actually wants a roman stream, she can e.g. set
class ivx to use stream i,ii..m. Setting either roman Class to use a
roman Stream causes the alpha class of that case to change to no-renumber.
Setting an alpha class to use any stream causes the roman stream of that
case to also use the same stream. Thus we will not permit a user to try
to have both an alpha stream AND a roman stream of the same letter case
at the same time.
The Renumber button checks for any nonmatched keys and only shows an error
dialog if any exist. Else it causes all Keys in the table to be renumbered
using the stream assigned to their class. This is a single-undo operation.
A Footnote Section is marked off using /F .. F/ markers (which are ignored by
the reflow code). The Move Notes button asks permission with a warning message.
On OK, it scans the document and makes a list of QTextCursors of the body of
all Footnote Sections. If none are found it shows an error and stops. If the
last one found is above the last Note in the table, it shows an error and stops.
Else it scans the Notes in the table from bottom up. For each note, if the note
is not already inside a Footnote section, its contents are inserted at the
head of the Footnote section next below it and deleted at the
original location. The QTextCursor in the table is repositioned.
The database of footnotes built by Refresh and shown in the table is cleared
on the DocHasChanged signal from pqMain, so it has to be rebuilt after any
book is loaded, and isn't saved. We should think about adding the footnote
info to the document metadata, but only if the Refresh operation proves to be
too lengthy to bear.
'''
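The class-assignment rules described above can be sketched in plain Python with the standard re module (patterns anchored with $ for clarity; the real classOfKey below only tests for a match starting at position 0). The ordering makes the roman/alpha ambiguity concrete: an all-roman-letters key such as IV is classed roman because roman is tried first.

```python
import re

# One anchored pattern per class, tried in order; roman before alpha.
CLASSES = [
    ('IVX', r'[IVXLCDM]{1,19}$'),
    ('ABC', r'[A-Z]{1,3}$'),
    ('ivx', r'[ivxlcdm]{1,19}$'),
    ('abc', r'[a-z]{1,3}$'),
    ('123', r'\d{1,4}$'),
    ('sym', '[*\u00a4\u00a7\u00b6\u2020\u2021]$'),
]

def class_of_key(key):
    for name, pattern in CLASSES:
        if re.match(pattern, key):
            return name
    return None

# 'IV' is ambiguous and resolves to roman; 'AB' can only be alpha.
assert class_of_key('IV') == 'IVX'
assert class_of_key('AB') == 'ABC'
assert class_of_key('ix') == 'ivx'
assert class_of_key('42') == '123'
```

This is why the renumber-stream comboboxes must coordinate: a book whose keys are all classed ABC may still intend a few of them (I, V, X...) as letters, so the roman stream defaults to following the alpha stream.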
from PyQt4.QtCore import (
Qt,
QAbstractTableModel,QModelIndex,
QChar, QString, QStringList,
QRegExp,
QVariant,
SIGNAL)
from PyQt4.QtGui import (
QBrush, QColor,
QComboBox,
QItemDelegate,
QSpacerItem,
QTableView,
QGroupBox,
QHBoxLayout, QVBoxLayout,
QHeaderView,
QLabel,
QLineEdit,
QPushButton,
QSpinBox,
QTextCursor,
QWidget)
import pqMsgs
# -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
# This code is global and relates to creating the "database" of footnotes.
# -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
# Right, let's get some constants defined globally
# KeyClass_* gives sequential integer values to the classes.
KeyClass_IVX = 0
KeyClass_ABC = 1
KeyClass_ivx = 2
KeyClass_abc = 3
KeyClass_123 = 4
KeyClass_sym = 5
# name strings in KeyClass_* numeric order
KeyClassNames = (
QString(u'IVX'),
QString(u'ABC'),
QString(u'ivx'),
QString(u'abc'),
QString(u'123'),
QString(u'*\u00a4\u00a7') )
# stream names as a QStringList in KeyClass_* numeric order
# (used in comboboxes)
StreamNames = QStringList(QString(u'I,II..M')) << \
QString(u'A,B,..ZZZ') << \
QString(u'i,ii..mm') << \
QString(u'a,b,..zzz') << \
QString(u'1,2,..9999') << \
QString(u'no renumber')
# class-detecting REs in KeyClass_* numeric order
ClassREs = (
u'[IVXLCDM]{1,19}', # ROMAN to MMMMDCCCCLXXXXVIII (4998)
u'[A-Z]{1,3}', # ALPHA to ZZZ
u'[ivxlcdm]{1,19}', # roman to whatever
u'[a-z]{1,3}', # alpha to zzz
u'\d{1,4}', # decimal to 9999
u'[\*\u00a4\u00a7\u00b6\u2020\u2021]' # star currency section para dagger dbl-dagger
)
# To avoid finding the [oe] ligature as an anchor, assiduously skip such keys
TheDreadedOE = QString(u'OE')
# The regex for finding a Ref to any possible Key class.
RefClassMatch = u'\[(' + u'|'.join(ClassREs) + u')\]'
RefFinderRE = QRegExp(RefClassMatch)
# The similar regex for finding the head of a Note of any Key class.
NoteFinderRE = QRegExp( u'\[Footnote\s+(' + u'|'.join(ClassREs) + u')\s*\:' )
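These two QRegExp patterns translate almost directly to Python's standard re module, which makes them easy to exercise outside Qt. A sketch (the patterns mirror ClassREs above; this is a stand-in for the QRegExp behavior, not the module's actual code path):

```python
import re

CLASS_RES = (
    r'[IVXLCDM]{1,19}',  # uppercase roman
    r'[A-Z]{1,3}',       # uppercase alpha
    r'[ivxlcdm]{1,19}',  # lowercase roman
    r'[a-z]{1,3}',       # lowercase alpha
    r'\d{1,4}',          # decimal
    '[*\u00a4\u00a7\u00b6\u2020\u2021]',  # symbols
)
REF_RE = re.compile(r'\[(' + '|'.join(CLASS_RES) + r')\]')
NOTE_RE = re.compile(r'\[Footnote\s+(' + '|'.join(CLASS_RES) + r')\s*:')

line = 'Text[A] with an [oe] ligature and more text[2]'
keys = [m.group(1) for m in REF_RE.finditer(line)
        if m.group(1).lower() != 'oe']   # skip the dreaded [oe]
assert keys == ['A', '2']
assert NOTE_RE.match('[Footnote A: this note has[i] an anchor.]').group(1) == 'A'
```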
# Some notes about QTextCursors. A cursor is connected to a document (our main
# document) and has an anchor and a position. If .anchor() != .position() there
# is a selection. Qt doesn't care which is lower (closer to the top of the doc)
# but we take pains herein that .anchor() < .position(), i.e. the cursor is
# "positioned" at the end of the selection, the anchor at the start.
# Given a QTextCursor that selects an Anchor, return its line number.
# (Also used for text cursors that index /F and F/ lines.)
def refLineNumber(tc):
if tc is not None:
return tc.block().blockNumber() # block number for tc.position()
return None
# Given a QTextCursor that selects a Note, return its line number, which is
# the block number for the anchor, not necessarily that of the position.
def noteLineNumber(tc):
if tc is not None:
return tc.document().findBlock(tc.anchor()).blockNumber()
return None
# Given a QTextCursor that selects a Note, return the number of lines in it.
def noteLineLength(tc):
if tc is not None:
return 1 + tc.blockNumber() - \
tc.document().findBlock(tc.anchor()).blockNumber()
return 0
# Given a QString that is a Key, return the class of the Key.
# single-class Regexes based on ClassREs above, tupled with the code.
ClassQRegExps = (
(KeyClass_IVX, QRegExp(ClassREs[KeyClass_IVX])),
(KeyClass_ABC, QRegExp(ClassREs[KeyClass_ABC])),
(KeyClass_123, QRegExp(ClassREs[KeyClass_123])),
(KeyClass_ivx, QRegExp(ClassREs[KeyClass_ivx])),
(KeyClass_abc, QRegExp(ClassREs[KeyClass_abc])),
(KeyClass_sym, QRegExp(ClassREs[KeyClass_sym]))
)
def classOfKey(qs):
for (keyclass,regex) in ClassQRegExps:
if 0 == regex.indexIn(qs):
return keyclass
return None
# Given a QTextCursor that selects a Key (typically an Anchor)
# return the class of the Key.
def classOfRefKey(tc):
return classOfKey(tc.selectedText())
# Given a QTextCursor that selects a Note, return the note's key.
# We assume that tc really selects a Note so that noteFinderRE will
# definitely hit so we don't check its return. All we want is its cap(1).
def keyFromNote(tc):
NoteFinderRE.indexIn(tc.selectedText())
return NoteFinderRE.cap(1)
# Given a QTextCursor that selects a Note, return the class of its key.
def classOfNoteKey(tc):
return classOfKey(keyFromNote(tc))
# Given a QTextCursor that selects a Note, return the leading characters,
# truncated at 40 chars, from the Note.
MaxNoteText = 40
def textFromNote(tc):
qs = QString()
if tc is not None:
qs = tc.selectedText()
if MaxNoteText < qs.size() :
qs.truncate(MaxNoteText-3)
qs.append(u'...')
return qs
# The following is the database for the table of footnotes.
# This is empty on startup and after the DocHasChanged signal, then built
# by the Refresh button.
TheFootnoteList = [ ]
TheCountOfUnpairedKeys = 0
TheEditCountAtRefresh = -1
# Make a database item given ref and note cursors as available.
# Note we copy the text cursors so the caller doesn't have to worry about
# overwriting, reusing, or letting them go out of scope afterward.
def makeDBItem(reftc,notetc):
keyqs = reftc.selectedText() if reftc is not None else keyFromNote(notetc)
item = {'K': keyqs,
'C': classOfKey(keyqs),
'R': QTextCursor(reftc) if reftc is not None else None,
'N': QTextCursor(notetc) if notetc is not None else None
}
return item
# Append a new matched footnote to the end of the database, given the
# cursors for the Anchor and the Note. It is assumed this is called on
# a top-to-bottom sequential scan so entries will be added in line# sequence.
def addMatchedPair(reftc,notetc):
global TheFootnoteList
TheFootnoteList.append(makeDBItem(reftc,notetc))
# insert an unmatched reference into the db in ref line number sequence.
# unmatched refs and notes are expected to be few, so a sequential scan is ok.
def insertUnmatchedRef(reftc):
global TheFootnoteList
item = makeDBItem(reftc,None)
j = refLineNumber(reftc)
for i in range(len(TheFootnoteList)):
if j <= refLineNumber(TheFootnoteList[i]['R']) :
TheFootnoteList.insert(i,item)
return
TheFootnoteList.append(item) # unmatched ref after all other refs
# insert an unmatched note in note line number sequence.
def insertUnmatchedNote(notetc):
global TheFootnoteList
item = makeDBItem(None,notetc)
j = noteLineNumber(notetc)
for i in range(len(TheFootnoteList)):
if j <= noteLineNumber(TheFootnoteList[i]['N']) :
TheFootnoteList.insert(i,item)
return
TheFootnoteList.append(item) # after the loop: unmatched note follows all other notes
# Based on the above spadework, do the Refresh operation
def theRealRefresh():
global TheFootnoteList, TheCountOfUnpairedKeys, TheEditCountAtRefresh
TheFootnoteList = [] # wipe the slate
TheCountOfUnpairedKeys = 0
TheEditCountAtRefresh = IMC.editCounter
doc = IMC.editWidget.document() # get handle of document
# initialize status message and progress bar
barCount = doc.characterCount()
pqMsgs.startBar(barCount * 2,"Scanning for notes and anchors")
barBias = 0
# scan the document from top to bottom finding Anchors and make a
# list of them as textcursors. doc.find(re,pos) returns a textcursor
# that .isNull on no hit.
listOrefs = []
findtc = QTextCursor(doc) # cursor that points to top of document
findtc = doc.find(RefFinderRE,findtc)
while not findtc.isNull() : # while we keep finding things
# findtc now selects the whole anchor [xx] but we want to only
# select the key. This means incrementing the anchor and decrementing
# the position; the means to do this are a bit awkward.
a = findtc.anchor()+1
p = findtc.position()-1
findtc.setPosition(a,QTextCursor.MoveAnchor) #click..
findtc.setPosition(p,QTextCursor.KeepAnchor) #..and drag
# The anchor could have been an [oe] character, don't save if so.
if findtc.selectedText().compare(TheDreadedOE, Qt.CaseInsensitive):
listOrefs.append(QTextCursor(findtc))
pqMsgs.rollBar(findtc.position())
findtc = doc.find(RefFinderRE,findtc) # look for the next
barBias = barCount
pqMsgs.rollBar(barBias)
# scan the document again top to bottom now looking for Notes, and make
# a list of them as textcursors.
listOnotes = []
findtc = QTextCursor(doc) # cursor that points to top of document
findtc = doc.find(NoteFinderRE,findtc)
while not findtc.isNull():
# findtc selects "[Footnote key:" now we need to find the closing
# right bracket, which must be at the end of its line. We will go
# by text blocks looking for a line that ends like this]
pqMsgs.rollBar(findtc.anchor()+barBias)
while True:
# "drag" to end of line, selecting whole line
findtc.movePosition(QTextCursor.EndOfBlock,QTextCursor.KeepAnchor)
if findtc.selectedText().endsWith(u']') :
break # now selecting whole note
if findtc.block() == doc.lastBlock() :
# ran off end of document looking for ...]
findtc.clearSelection() # just forget this note, it isn't a note
break # we could tell user, unterminated note. eh.
else: # there is another line, step to its head and look again
findtc.movePosition(QTextCursor.NextBlock,QTextCursor.KeepAnchor)
if findtc.hasSelection() : # we did find the line ending in ]
listOnotes.append(QTextCursor(findtc))
findtc = doc.find(NoteFinderRE,findtc) # find next, fail at end of doc
# Now, listOrefs is all the Anchors and listOnotes is all the Notes,
# both in sequence by document position. Basically, merge these lists.
# For each Ref in sequence, find the first Note with a matching key at
# a higher line number. If there is one, add the matched pair to the db,
# and delete the note from its list. If there is no match, copy the
# ref to a list of unmatched refs (because we can't del from the listOrefs
# inside the loop over it).
# This is not an MxN process despite appearances, as (a) most refs
# will find a match, (b) most matches appear quickly and (c) we keep
# shortening the list of notes.
listOfOrphanRefs = []
for reftc in listOrefs:
hit = False
refln = refLineNumber(reftc) # note line number for comparison
for notetc in listOnotes:
if 0 == reftc.selectedText().compare(keyFromNote(notetc)) and \
refln < noteLineNumber(notetc) :
hit = True
break
if hit : # a match was found
addMatchedPair(reftc,notetc)
listOnotes.remove(notetc)
else:
listOfOrphanRefs.append(reftc)
# All the matches have been made (in heaven?). If there remain any
# unmatched refs or notes, insert them in the db as well.
for reftc in listOfOrphanRefs:
insertUnmatchedRef(reftc)
for notetc in listOnotes:
insertUnmatchedNote(notetc)
TheCountOfUnpairedKeys = len(listOfOrphanRefs)+len(listOnotes)
# clear the status and progress bar
pqMsgs.endBar()
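The merge step at the heart of theRealRefresh can be sketched without Qt by representing anchors and notes as (key, line) tuples in document order. This is a simplified model, not the plugin's actual data structures, but it shows the pairing rule: each anchor takes the first unconsumed note with the same key at a higher line number.

```python
def match_pairs(refs, notes):
    """refs, notes: lists of (key, line) tuples in document order.
    Returns (pairs, orphan_refs, orphan_notes)."""
    notes = list(notes)              # consumed as matches are made
    pairs, orphan_refs = [], []
    for key, ref_line in refs:
        hit = None
        for note in notes:
            # a note must share the key and follow its anchor
            if note[0] == key and ref_line < note[1]:
                hit = note
                break
        if hit is not None:
            pairs.append(((key, ref_line), hit))
            notes.remove(hit)        # each note matches at most once
        else:
            orphan_refs.append((key, ref_line))
    return pairs, orphan_refs, notes

refs = [('A', 10), ('A', 12), ('B', 20)]
notes = [('A', 30), ('A', 31)]
pairs, orefs, onotes = match_pairs(refs, notes)
assert pairs == [(('A', 10), ('A', 30)), (('A', 12), ('A', 31))]
assert orefs == [('B', 20)] and onotes == []
```

Duplicate keys pair up in document order, which is exactly the requirement stated in the panel's docstring, and consuming each matched note keeps the scan well short of M×N in practice.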
# -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
# This code implements the Fnote table and its interactions.
# -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
# Implement a concrete table model by subclassing Abstract Table Model.
# The data served is derived from the TheFootnoteList, above.
class myTableModel(QAbstractTableModel):
def __init__(self, parent=None):
super(myTableModel, self).__init__(parent)
# The header texts for the columns
self.headerDict = {
0:"Key", 1:"Class", 2:"Ref line", 3:"Note Line", 4:"Length", 5:"Text"
}
# the text alignments for the columns
self.alignDict = { 0:Qt.AlignCenter, 1: Qt.AlignCenter,
2: Qt.AlignRight, 3: Qt.AlignRight,
4: Qt.AlignRight, 5: Qt.AlignLeft }
# The values for tool/status tips for data and headers
self.tipDict = { 0: "Actual key text",
1: "Assumed class of key for renumbering",
2: "Line number of the Anchor",
3: "First line number of the Note",
4: "Number of lines in the Note",
5: "Initial text of the Note"
}
# The brushes for painting the background of good and questionable rows
self.whiteBrush = QBrush(QColor(QString('transparent')))
self.pinkBrush = QBrush(QColor(QString('lightpink')))
self.greenBrush = QBrush(QColor(QString('palegreen')))
# Here save the expansion of one database item for convenient fetching
self.lastRow = -1
self.lastTuple = ()
self.brushForRow = QBrush()
def columnCount(self,index):
if index.isValid() : return 0 # we don't have a tree here
return 6
def flags(self,index):
f = Qt.ItemIsEnabled
#if index.column() ==1 :
#f |= Qt.ItemIsEditable # column 1 only editable
return f
def rowCount(self,index):
if index.isValid() : return 0 # we don't have a tree here
return len(TheFootnoteList) # initially 0
def headerData(self, col, axis, role):
if (axis == Qt.Horizontal) and (col >= 0):
if role == Qt.DisplayRole : # wants actual text
return QString(self.headerDict[col])
elif (role == Qt.ToolTipRole) or (role == Qt.StatusTipRole) :
return QString(self.tipDict[col])
return QVariant() # we don't do that, whatever it is
# This method is called whenever the table view wants to know practically
# anything about the visible aspect of a table cell. The row & column are
# in the index, and what it wants to know is expressed by the role.
def data(self, index, role ):
# whatever it wants, we need the row data. Get it into self.lastTuple
if index.row() != self.lastRow :
# We assume Qt won't ask for any row outside 0..rowCount-1.
# We TRUST it will go horizontally, hitting a row multiple times,
# before going on to the next row.
r = index.row()
rtc = TheFootnoteList[r]['R']
ntc = TheFootnoteList[r]['N']
rln = refLineNumber(rtc)
nln = noteLineNumber(ntc)
nll = noteLineLength(ntc) # None if ntc is None
self.lastTuple = (
TheFootnoteList[r]['K'], # key as a qstring
KeyClassNames[TheFootnoteList[r]['C']], # class as qstring
QString(unicode(rln)) if rtc is not None else QString("?"),
QString(unicode(nln)) if ntc is not None else QString("?"),
QString(unicode(nll)),
textFromNote(ntc)
)
self.brushForRow = self.whiteBrush
if (rtc is None) or (ntc is None):
self.brushForRow = self.pinkBrush
elif 10 < nll or 50 < (nln-rln) :
self.brushForRow = self.greenBrush
# Now, what was it you wanted?
if role == Qt.DisplayRole : # wants actual data
return self.lastTuple[index.column()] # so give it.
elif (role == Qt.TextAlignmentRole) :
return self.alignDict[index.column()]
elif (role == Qt.ToolTipRole) or (role == Qt.StatusTipRole) :
return QString(self.tipDict[index.column()])
elif (role == Qt.BackgroundRole) or (role == Qt.BackgroundColorRole):
return self.brushForRow
# don't support other roles
return QVariant()
# -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
# This code creates the Fnote panel and implements the other UI widgets.
# -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
# Used during renumbering: given an integer, return an upper- or
# lowercase roman numeral. Cribbed from Mark Pilgrim's "Dive Into Python".
RomanNumeralMap = (('M', 1000),
('CM', 900),
('D', 500),
('CD', 400),
('C', 100),
('XC', 90),
('L', 50),
('XL', 40),
('X', 10),
('IX', 9),
('V', 5),
('IV', 4),
('I', 1))
def toRoman(n,lc):
"""convert integer to Roman numeral"""
if not (0 < n < 5000):
raise ValueError, "number out of range (must be 1..4999)"
if int(n) <> n:
raise ValueError, "decimals can not be converted"
result = ""
for numeral, integer in RomanNumeralMap:
while n >= integer:
result += numeral
n -= integer
qs = QString(result)
if lc : return qs.toLower()
return qs
AlphaMap = u'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
def toAlpha(n,lc=False):
'''convert integer to alpha A..ZZZ (nb. 26**3 == 17576)'''
if not (0 < n < 17577):
raise ValueError, "number out of range (must be 1..17576)"
if int(n) <> n:
raise ValueError, "decimals can not be converted"
result = ''
while True :
(n,m) = divmod(n-1,26)
result = AlphaMap[m]+result
if n == 0 : break
qs = QString(result)
if lc : return qs.toLower()
return qs
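Both formatters above translate directly to Python 3. A sketch with the QString wrapper dropped (the lc flag is just .lower() on the result, so it is omitted here):

```python
ROMAN = (('M', 1000), ('CM', 900), ('D', 500), ('CD', 400),
         ('C', 100), ('XC', 90), ('L', 50), ('XL', 40),
         ('X', 10), ('IX', 9), ('V', 5), ('IV', 4), ('I', 1))

def to_roman(n):
    """Greedy subtractive conversion, valid for 1..4999."""
    if not 0 < n < 5000:
        raise ValueError('number out of range (must be 1..4999)')
    out = ''
    for numeral, value in ROMAN:
        while n >= value:
            out += numeral
            n -= value
    return out

def to_alpha(n):
    """Bijective base-26: 1->A, 26->Z, 27->AA, 28->AB, ..."""
    out = ''
    while True:
        n, m = divmod(n - 1, 26)
        out = chr(ord('A') + m) + out
        if n == 0:
            break
    return out

assert to_roman(1999) == 'MCMXCIX'
assert to_alpha(26) == 'Z' and to_alpha(28) == 'AB'
```

The divmod(n - 1, 26) step is what makes the alpha stream bijective: unlike ordinary base-26 there is no zero digit, so Z carries into AA rather than A0.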
class fnotePanel(QWidget):
def __init__(self, parent=None):
super(fnotePanel, self).__init__(parent)
# Here we go making a layout. The outer shape is a vbox.
mainLayout = QVBoxLayout()
self.setLayout(mainLayout)
# The following things are stacked inside the vbox.
# 1, the Refresh button, left-justifed in an hbox.
refreshLayout = QHBoxLayout()
self.refreshButton = QPushButton("Refresh")
refreshLayout.addWidget(self.refreshButton,0)
refreshLayout.addStretch(1) # stretch on right left-aligns the button
mainLayout.addLayout(refreshLayout)
self.connect(self.refreshButton, SIGNAL("clicked()"), self.doRefresh)
# 2, The table of footnotes, represented as a QTableView that displays
# our myTableModel.
self.view = QTableView()
self.view.setCornerButtonEnabled(False)
self.view.setWordWrap(False)
self.view.setAlternatingRowColors(False)
self.view.setSortingEnabled(False)
mainLayout.addWidget(self.view,1) # It gets all stretch for the panel
# create the table (empty just now) and display it
self.table = myTableModel() #
self.view.setModel(self.table)
# Connect the table view's clicked to our clicked slot
self.connect(self.view, SIGNAL("clicked(QModelIndex)"), self.tableClick)
# 3, an hbox containing 3 vboxes each containing 2 hboxes... ok, let's
# start with 6 comboboxes, one for each class.
self.pickIVX = self.makeStreamMenu(KeyClass_ABC) # initialize both IVX
self.pickABC = self.makeStreamMenu(KeyClass_ABC) # ..and ABC to A,B
self.pickivx = self.makeStreamMenu(KeyClass_abc) # similarly
self.pickabc = self.makeStreamMenu(KeyClass_abc)
self.pick123 = self.makeStreamMenu(KeyClass_123)
self.picksym = self.makeStreamMenu()
# while we are at it let us connect their signals to the methods
# that enforce their behavior.
self.connect(self.pickIVX, SIGNAL("activated(int)"),self.IVXpick)
self.connect(self.pickABC, SIGNAL("activated(int)"),self.ABCpick)
self.connect(self.pickivx, SIGNAL("activated(int)"),self.ivxpick)
self.connect(self.pickabc, SIGNAL("activated(int)"),self.abcpick)
# Now make 6 hboxes each containing a label and the corresponding
# combobox.
hbIVX = self.makePair(KeyClassNames[0],self.pickIVX)
hbABC = self.makePair(KeyClassNames[1],self.pickABC)
hbivx = self.makePair(KeyClassNames[2],self.pickivx)
hbabc = self.makePair(KeyClassNames[3],self.pickabc)
hb123 = self.makePair(KeyClassNames[4],self.pick123)
hbsym = self.makePair(KeyClassNames[5],self.picksym)
# Stack up the pairs in three attractive vboxes
vbIA = self.makeStack(hbABC,hbIVX)
vbia = self.makeStack(hbabc,hbivx)
vbns = self.makeStack(hb123,hbsym)
# Array them across a charming hbox and stick it in our panel
hbxxx = QHBoxLayout()
hbxxx.addLayout(vbIA)
hbxxx.addLayout(vbia)
hbxxx.addLayout(vbns)
hbxxx.addStretch(1)
mainLayout.addLayout(hbxxx)
# Finally, the action buttons on the bottom in a frame.
doitgb = QGroupBox("Actions")
doithb = QHBoxLayout()
self.renumberButton = QPushButton("Renumber")
self.moveButton = QPushButton("Move Notes")
self.asciiButton = QPushButton("ASCII Cvt")
self.htmlButton = QPushButton("HTML Cvt")
doithb.addWidget(self.renumberButton,0)
doithb.addStretch(1)
doithb.addWidget(self.moveButton)
doithb.addStretch(1)
doithb.addWidget(self.asciiButton)
doithb.addStretch(1)
doithb.addWidget(self.htmlButton)
doitgb.setLayout(doithb)
mainLayout.addWidget(doitgb)
# and connect the buttons to actions
self.connect(self.renumberButton, SIGNAL("clicked()"), self.doRenumber)
self.connect(self.moveButton, SIGNAL("clicked()"), self.doMove)
self.connect(self.asciiButton, SIGNAL("clicked()"), self.doASCII)
self.connect(self.htmlButton, SIGNAL("clicked()"), self.doHTML)
# The renumber streams and a set of lambdas for getting the
# next number in sequence from them. The lambdas are selected by the
# value in a stream menu combo box, 0-4 or 5 meaning no-renumber.
self.streams = [0,0,0,0,0,0]
self.streamLambdas = [
lambda s : toRoman(s,False),
lambda s : toAlpha(s,False),
lambda s : toRoman(s,True),
lambda s : toAlpha(s,True),
lambda s : QString(unicode(s)),
lambda s : None]
self.streamMenuList = [
self.pickIVX,self.pickABC,self.pickivx,
self.pickabc,self.pick123,self.picksym]
# Note a count of items over which it is worthwhile to run a
# progress bar during renumber, move, etc. Reconsider later: 100? 200?
self.enoughForABar = 100
# Convenience function to shorten code when instantiating
def makeStreamMenu(self,choice=5):
cb = QComboBox()
cb.addItems(StreamNames)
cb.setCurrentIndex(choice)
return cb
# Convenience function to shorten code when instantiating
def makePair(self,qs,cb):
hb = QHBoxLayout()
hb.addWidget(QLabel(qs))
hb.addWidget(cb)
hb.addStretch(1)
return hb
# Convenience function to shorten code when instantiating
def makeStack(self,pair1,pair2):
vb = QVBoxLayout()
vb.addLayout(pair1)
vb.addLayout(pair2)
vb.addStretch(1)
return vb
# The slot for a click of the Refresh button. Tell the table model we are
# changing stuff; then call theRealRefresh; then tell table we're good.
def doRefresh(self):
self.table.beginResetModel()
theRealRefresh()
self.table.endResetModel()
self.view.resizeColumnsToContents()
# These slots are invoked when a choice is made in the stream popup menu
# for an ambiguous class, to ensure that contradictory choices aren't made.
# If the user sets the IVX stream to the same as the ABC stream, or
# to no-renumber, fine. Otherwise, she is asserting that she has valid
# IVX footnote keys, in which case ABC needs to be no-renumber.
def IVXpick(self,pick):
if (pick != self.pickABC.currentIndex()) and (pick != 5) :
self.pickABC.setCurrentIndex(5)
# If the user sets the ABC stream to anything but no-renumber, she is
# asserting that there are valid ABC keys in which case, keys we have
# classed as IVX need to use the same stream.
def ABCpick(self,pick):
if pick != 5 :
self.pickIVX.setCurrentIndex(pick)
# And similarly for lowercase.
def ivxpick(self,pick):
if (pick != self.pickabc.currentIndex()) and (pick != 5) :
self.pickabc.setCurrentIndex(5)
def abcpick(self,pick):
if pick != 5 :
self.pickivx.setCurrentIndex(pick)
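The invariant these four slots maintain — the same letter case never renumbers as both alpha and roman — can be stated as two pure functions over stream indices alone (a sketch; index 5 is no-renumber, matching StreamNames):

```python
NO_RENUMBER = 5

def on_roman_pick(pick, alpha_pick):
    """Return the new alpha pick after the roman box changes."""
    # Choosing a real roman stream (not alpha's own stream, not
    # no-renumber) asserts the keys really are roman: alpha goes off.
    if pick != alpha_pick and pick != NO_RENUMBER:
        return NO_RENUMBER
    return alpha_pick

def on_alpha_pick(pick, roman_pick):
    """Return the new roman pick after the alpha box changes."""
    # Choosing any real alpha stream forces roman onto the same stream,
    # so ambiguous keys like I or V renumber with the other letters.
    if pick != NO_RENUMBER:
        return pick
    return roman_pick

assert on_roman_pick(0, 1) == NO_RENUMBER  # roman stream chosen: alpha off
assert on_roman_pick(5, 1) == 1            # no-renumber: alpha untouched
assert on_alpha_pick(1, 0) == 1            # alpha chosen: roman follows
```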
# The slot for a click anywhere in the tableview. If the click is on:
# * column 0 or 1 (key or class) we jump to the ref line, unless we are on
# the ref line in which case we jump to the note line (ping-pong).
# * column 2 (ref line) we jump to the ref line.
# * column 3, 4, 5 (note line or note) we jump to the note line.
# In each case, to "jump" means, set the document cursor to the reftc
# or the notetc, making the ref or note the current selection.
def tableClick(self,index):
r = index.row()
c = index.column()
dtc = IMC.editWidget.textCursor()
rtc = TheFootnoteList[r]['R']
ntc = TheFootnoteList[r]['N']
targtc = None
if c > 2 : # column 3 4 or 5
targtc = ntc
elif c == 2 :
targtc = rtc
else: # c == 0 or 1
dln = dtc.blockNumber()
rln = refLineNumber(rtc) # None, if rtc is None
if dln == rln : # True if there is a ref line and we are on it
targtc = ntc # go to the note
else: # there isn't a ref line (rtc is None) or there is and we want it
targtc = rtc
if targtc is not None:
IMC.editWidget.setTextCursor(targtc)
IMC.editWidget.centerCursor()
# The slots for the main window's docWill/HasChanged signals.
# Right now, just clear the footnote database, the user can hit
# refresh when he wants the info. If the refresh proves to be
# very small performance hit even in a very large book, we could
# look at calling doRefresh automatically after docHasChanged.
def docWillChange(self):
self.table.beginResetModel()
def docHasChanged(self):
global TheFootnoteList
TheFootnoteList = []
self.table.endResetModel()
# Subroutine to make sure it is ok to do a major revision such as renumber or move.
# First, if the document has changed since the last time we did a refresh, do one
# Second, if there are then any unpaired keys, display a message and return false.
def canWeRevise(self,action):
global TheCountOfUnpairedKeys, TheEditCountAtRefresh
if TheEditCountAtRefresh != IMC.editCounter :
self.doRefresh()
if TheCountOfUnpairedKeys != 0 :
pqMsgs.warningMsg(
"Cannot {0} with orphan notes and anchors".format(action),
"The count of unpaired anchors and notes is: {0}".format(TheCountOfUnpairedKeys)
)
return False # dinna do it, laddie!
return True # ok to go ahead
# The slot for the Renumber button. Check to see if any unpaired keys and
# don't do it if there are any. But if all are matched, go through the
# database top to bottom (because that is the direction the user expects
# the number streams to increment). For each key, develop a new key string
# based on its present class and the stream selection for that class.
def doRenumber(self):
global TheFootnoteList
if not self.canWeRevise(u"Renumber") :
return
# If the database is actually empty, just do nothing.
dbcount = len(TheFootnoteList)
if dbcount < 1 : return
# OTOH, if there is significant work to do, start the progress bar.
if dbcount >= self.enoughForABar :
pqMsgs.startBar(dbcount,"Renumbering footnotes...")
# clear the number streams
self.streams = [0,0,0,0,0,0]
# Tell the table model that things are gonna change
self.table.beginResetModel()
# create a working cursor and start an undo macro on it.
worktc = QTextCursor(IMC.editWidget.textCursor())
worktc.beginEditBlock()
for i in range(dbcount) : # there's a reason this isn't "for item in..."
item = TheFootnoteList[i]
# Note this key's present string value and class number.
oldkeyqs = item['K']
oldclass = item['C']
# Get the renumber stream index for the present class
renchoice = self.streamMenuList[oldclass].currentIndex()
# Increment that stream (if no-renumber, increment is harmless)
self.streams[renchoice] += 1
# Format the incremented value as a string based on stream choice
# This produces None if renchoice is 5, no-renumber. It could produce
# a value error on a too-big roman numeral or other unlikely things.
try :
newkeyqs = self.streamLambdas[renchoice](self.streams[renchoice])
# If that produced an alpha oe or OE, skip that value
if 0 == newkeyqs.compare(TheDreadedOE, Qt.CaseInsensitive) :
self.streams[renchoice] += 1
newkeyqs = self.streamLambdas[renchoice](self.streams[renchoice])
except ValueError, errmsg :
pqMsgs.warningMsg(
"Error encoding {0} key stream".format(KeyClassNames[renchoice]),
"Numbers will be wrong, recommend Undo when operation ends"
)
self.streams[renchoice] = 0 # restart that stream
newkeyqs = self.streamLambdas[renchoice](self.streams[renchoice])
if newkeyqs is not None: # not no-renumber, so we do it
# infer the key class of the new key string
newclass = classOfKey(newkeyqs)
# ## Replace the key in the note text:
# First, make a pattern to match the old key. Do it by making
# a COPY of the old key and appending : to the COPY. We need
# the colon because the key text might be e.g. "o" or "F".
targqs = QString(oldkeyqs).append(u':')
# Cause worktc to select the opening text of the note through
# the colon, from notetc. Don't select the whole note as we will
# use QString::replace which replaces every match it finds.
notetc = item['N']
worktc.setPosition(notetc.anchor())
worktc.setPosition(notetc.anchor()+10+targqs.size(),QTextCursor.KeepAnchor)
# Get that selected text as a QString
workqs = worktc.selectedText()
# Find the offset of the old key (s.b. 10 but not anal about spaces)
targix = workqs.indexOf(targqs,0,Qt.CaseSensitive)
# Replace the old key text with the new key text
workqs.replace(targix,oldkeyqs.size(),newkeyqs)
# put the modified text back in the document, replacing just
# [Footnote key:. Even this will make Qt mess with the anchor
# and position of notetc, so set up to recreate that.
selstart = notetc.anchor()
selend = notetc.position()-oldkeyqs.size()+newkeyqs.size()
worktc.insertText(workqs)
notetc.setPosition(selstart)
notetc.setPosition(selend,QTextCursor.KeepAnchor)
# ## Replace the key in the anchor, a simpler job, although
# we again have to recover the selection
reftc = item['R']
selstart = reftc.anchor()
sellen = newkeyqs.size()
worktc.setPosition(selstart)
worktc.setPosition(reftc.position(),QTextCursor.KeepAnchor)
worktc.insertText(newkeyqs)
reftc.setPosition(selstart)
reftc.setPosition(selstart+sellen,QTextCursor.KeepAnchor)
# Update the database item. The two cursors are already updated.
# Note that this is Python; "item" is a reference to
# TheFootnoteList[i], ergo we are updating the db in place.
item['K'] = newkeyqs
item['C'] = newclass
# end of "newkeyqs is not None"
if dbcount >= self.enoughForABar and 0 == (i & 7):
pqMsgs.rollBar(i)
# end of "for i in range(dbcount)"
# Clean up:
worktc.endEditBlock() # End the undo macro
self.table.endResetModel() # tell the table the data have stabilized
        if dbcount >= self.enoughForABar :
pqMsgs.endBar() # clear the progress bar
# The slot for the Move button. Check to see if any unpaired keys and
# don't do it if there are any. But if all are matched, first find all
# footnote sections in the document and make a list of them in the form
# of textcursors. Get user permission, showing section count as a means
# of validating markup, then move each note that is not in a section,
# into the section next below it. Update the note cursors in the db.
def doMove(self):
global TheFootnoteList
if not self.canWeRevise(u"Move Notes to /F..F/") :
return
# If the database is actually empty, just do nothing.
dbcount = len(TheFootnoteList)
if dbcount < 1 : return
# Create a working text cursor.
worktc = QTextCursor(IMC.editWidget.textCursor())
# Search the whole document and find the /F..F/ sections. We could look
# for lines starting /F and then after finding one, for the F/ line, but
# the logic gets messy when the user might have forgotten or miscoded
# the F/. So we use a regex that will cap(0) the entire section. We are
# not being Mr. Nice Guy and allowing \s* spaces either, it has to be
# zackly \n/F\n.*\nF/\n.
sectRegex = QRegExp(u'\\n/F.*\\nF/(\\n|$)')
sectRegex.setMinimal(True) # minimal match for the .* above
sectRegex.setCaseSensitivity(Qt.CaseSensitive)
wholeDocQs = IMC.editWidget.toPlainText() # whole doc as qstring
sectList = []
j = sectRegex.indexIn(wholeDocQs,0)
while j > -1:
# initialize text cursors to record the start and end positions
# of each section. Note, cursors point between characters:
# sectcA----v
# sectcI----v sectcB---v
# ... \2029 / F \2029 ..whatever.. \2029 F / \2029
# Notes are inserted at sectcI which is moved ahead each time. Qt
# takes care of updating sectcB and other cursors on inserts.
# The line number of sectcA is that of the first line after /F,
# and that of sectcB is that of the F/ for comparison.
sectcA = QTextCursor(worktc)
sectcA.setPosition(j+4)
sectcB = QTextCursor(worktc)
sectcB.setPosition(j+sectRegex.matchedLength()-3)
sectcI = QTextCursor(sectcA)
sectList.append( (sectcA,sectcI,sectcB) )
j = sectRegex.indexIn(wholeDocQs,j+1)
# Let wholeDocQs go out of scope just in case it is an actual copy
# of the document. (Should just be a const reference but who knows?)
wholeDocQs = QString()
# Did we in fact find any footnote sections?
if len(sectList) == 0:
pqMsgs.warningMsg(u"Found no /F..F/ footnote sections.")
return
# Since this is a big deal, and /F is easy to mis-code, let's show
# the count found and get approval.
if not pqMsgs.okCancelMsg(
u"Found {0} footnote sections".format(len(sectList)),
"OK to proceed with the move?") :
return
# Right, we're gonna do stuff. If there is significant work to do,
# start the progress bar.
if dbcount >= self.enoughForABar :
pqMsgs.startBar(dbcount,"Moving Notes to /F..F/ sections")
# Tell the table model that things are gonna change
self.docWillChange()
# Start an undo macro on the working cursor.
worktc = QTextCursor(IMC.editWidget.textCursor())
worktc.beginEditBlock()
# loop over all notes.
for i in range(dbcount):
notetc = TheFootnoteList[i]['N']
nln = noteLineNumber(notetc)
# Look for the first section whose last line is below nln
for s in range(len(sectList)):
(sectcA,sectcI,sectcB) = sectList[s]
if nln >= refLineNumber(sectcB):
# this note starts below this section s
continue # to next s
# this note starts above the end of this section,
if nln >= refLineNumber(sectcA):
# however this note is inside this section already
break # and move on to next i
# this note is above, and not within, the section sectList[s],
# so do the move. Start saving the length of the note as
# currently known.
notelen = notetc.position() - notetc.anchor()
# Modify the note selection to include both the \2029 that
# precedes the note and the \2029 that follows the right bracket.
# This assumes that a note is not at the exact beginning of a document
# (seems safe enough) and not at the end either (the /F follows it).
new_anchor = notetc.anchor() - 1
new_position = notetc.position() + 1
notetc.setPosition(new_anchor)
notetc.setPosition(new_position,QTextCursor.KeepAnchor)
# point our worktc at the insertion point in this section
worktc.setPosition(sectcI.position())
# copy the note text inserting it in the section
worktc.insertText(notetc.selectedText())
# save the ending position as the new position of sectcI -- the
# next inserted note goes there
sectcI.setPosition(worktc.position())
# clear the old note text. Have to do this using worktc for
# the undo-redo macro to record it. When the text is removed,
# Qt adjusts all cursors that point below it, including sectcI.
worktc.setPosition(notetc.anchor())
worktc.setPosition(notetc.position(),QTextCursor.KeepAnchor)
worktc.removeSelectedText()
# reset notetc to point to the new note location
notepos = sectcI.position()-notelen-1
notetc.setPosition(notepos)
notetc.setPosition(notepos+notelen,QTextCursor.KeepAnchor)
break # all done scanning sectList for this note
# end of "for s in range(len(sectList))"
if dbcount >= self.enoughForABar and 0 == (i & 7) :
pqMsgs.rollBar(i)
# end of "for i in range(dbcount)"
# Clean up:
worktc.endEditBlock() # End the undo macro
theRealRefresh() # fix up the line numbers in the table
self.docHasChanged() # tell the table the data has stabilized
        if dbcount >= self.enoughForABar :
pqMsgs.endBar() # clear the progress bar
# The slot for the HTML button. Make sure the db is clean and there is work
# to do. Then go through each item and update as follows:
# Around the anchor put:
# <a id='FA_key' href='#FN_key' class='fnanchor'>[key]</a>
# Replace "[Footnote key:" with
# <div class='footnote' id='FN_key'>\n
# <span class="fnlabel"><a href='FA_key'>[key]</a></span> text..
# Replace the final ] with \n\n</div>
# The idea is that the HTML conversion in pqFlow will see the \n\n
# and insert <p> and </p> as usual.
# We work the list from the bottom up because of nested references.
# Going top-down, we would rewrite a Note containing an Anchor, and
# that unavoidably messes up the reftc pointing to that Anchor.
# Going bottom-up, we rewrite the nested Anchor before we rewrite the
# Note that contains it.
def doHTML(self):
global TheFootnoteList
if not self.canWeRevise(u"Convert Footnotes to HTML") :
return
# If the database is actually empty, just do nothing.
dbcount = len(TheFootnoteList)
if dbcount < 1 : return
# Just in case the user had a spastic twitch and clicked in error,
if not pqMsgs.okCancelMsg(
"Going to convert {0} footnotes to HTML".format(dbcount),
"Note Symbol class keys are skipped."):
return
# Set up a boilerplate string for the Anchor replacements.
# Each %n placeholder is replaced by a copy of the key value.
anchor_pattern = QString(u"<a name='FA_%1' id='FA_%2' href='#FN_%3' class='fnanchor'>[%4]</a>")
# Set up a regex pattern to recognize [Footnote key:, being forgiving
# about extra spaces and absorbing spaces after the colon.
# %1 is replaced by the key value.
fnt_pattern = QString(u"\[Footnote\s+%1\s*:\s*")
fnt_RE = QRegExp()
# Set up a replacement boilerplate for [Footnote key.
# Each %n placeholder is replaced by a copy of the key value.
fnt_rep = QString(u"<div class='footnote' id='FN_%1'>\u2029<span class='fnlabel'><a href='#FA_%2'>[%3]</a></span> ")
# Make a working textcursor, start the undo macro, advise the table
worktc = QTextCursor(IMC.editWidget.textCursor())
worktc.beginEditBlock()
self.docWillChange()
if dbcount >= self.enoughForABar :
pqMsgs.startBar(dbcount,"Converting notes to HTML...")
for i in reversed(range(dbcount)):
item = TheFootnoteList[i]
# Don't even try to convert symbol-class keys
if item['C'] == KeyClass_sym :
continue
key_qs = item['K'] # Key value as qstring
key_tc = item['R'] # cursor that selects the key
# note the start position of the anchor, less 1 to include the [
anchor_start = key_tc.anchor() - 1
# note the anchor end position, plus 1 for the ]
anchor_end = key_tc.position() + 1
# Copy the anchor boilerplate and install the key in it
anchor_qs = anchor_pattern.arg(key_qs,key_qs,key_qs,key_qs)
# Replace the anchor text, using the work cursor.
worktc.setPosition(anchor_start)
worktc.setPosition(anchor_end,QTextCursor.KeepAnchor)
worktc.insertText(anchor_qs)
# Note the start position of the note
note_tc = item['N']
note_start = note_tc.anchor()
# Note its end position, which includes the closing ]
note_end = note_tc.position()
# Copy the note boilerplates and install the key in them.
note_pattern = fnt_pattern.arg(key_qs)
note_qs = fnt_rep.arg(key_qs,key_qs,key_qs)
# Point the work cursor at the note.
worktc.setPosition(note_start)
worktc.setPosition(note_end,QTextCursor.KeepAnchor)
# get the note as a string, truncate the closing ],
# append </div> on a separate line, and put it back.
oldnote = worktc.selectedText()
oldnote.chop(1)
oldnote.append(QString(u"\u2029</div>"))
worktc.insertText(oldnote) # worktc now positioned after note
# use the note string to recognize the length of [Footnote key:sp
fnt_RE.setPattern(note_pattern)
j = fnt_RE.indexIn(oldnote) # assert j==0
j = fnt_RE.cap(0).size() # size of the target portion
# set the work cursor to select just that, and replace it.
worktc.setPosition(note_start)
worktc.setPosition(note_start + j,QTextCursor.KeepAnchor)
worktc.insertText(note_qs)
if dbcount >= self.enoughForABar and 0 == (i & 7):
pqMsgs.rollBar(dbcount - i)
# end of "for i in range(dbcount)"
# Clean up:
worktc.endEditBlock() # End the undo macro
self.docHasChanged() # tell the table the data has stabilized
        if dbcount >= self.enoughForABar :
pqMsgs.endBar() # clear the progress bar
# The slot for the ASCII button. Make sure the db is clean and there is work
# to do. Then go through all Notes (the Anchors are left alone)
# and update all Notes as follows:
# Replace "[Footnote key:" with /Q Fnote XXX\n [key]
# where XXX is the KeyClassName, e.g. ABC or ivx.
# Replace the final ] with \nQ/\n
# The idea is to change a footnote into a block quote tagged with the class
# which is ignored by reflow, but can be used to do find/replace.
def doASCII(self):
global TheFootnoteList, KeyClassNames
if not self.canWeRevise(u"Convert Footnotes to /Q..Q/") :
return
# If the database is actually empty, just do nothing.
dbcount = len(TheFootnoteList)
if dbcount < 1 : return
# Just in case the user had a spastic twitch and clicked in error,
if not pqMsgs.okCancelMsg(
"Going to convert {0} footnotes to /Q..Q/".format(dbcount),
""):
return
# Set up a regex pattern to recognize [Footnote key: being forgiving
# about extra spaces and absorbing spaces after the colon. The %1
# marker is replaced in a QString.arg() operation with the key value.
fnt_pattern = QString(u"\[Footnote\s+%1\s*:\s*")
fnt_RE = QRegExp()
# Set up a replacement boilerplate for [Footnote key. Here %1 is
# replaced with the key classname and %2 with the key value.
fnt_rep = QString(u"/Q FNote %1\u2029 [%2] ")
# Make a working textcursor, start the undo macro, advise the table
worktc = QTextCursor(IMC.editWidget.textCursor())
worktc.beginEditBlock()
self.docWillChange()
if dbcount >= self.enoughForABar :
pqMsgs.startBar(dbcount,"Converting notes to ASCII...")
for i in range(dbcount):
item = TheFootnoteList[i]
key_qs = item['K']
# Get the cursor that selects the Note.
note_tc = item['N']
# Record the start position of the note
note_start = note_tc.anchor()
# Record its end position, which includes the closing ]
note_end = note_tc.position()
# Copy the regex pattern with the actual key in it.
note_pat = fnt_pattern.arg(key_qs)
# Copy the replacement string with the keyclass and key in it
note_qs = fnt_rep.arg(KeyClassNames[item['C']]).arg(key_qs)
# Point the work cursor at the note.
worktc.setPosition(note_start)
worktc.setPosition(note_end,QTextCursor.KeepAnchor)
# get the note as a string, truncate the closing ], add the
# newline and Q/, and put it back.
oldnote = worktc.selectedText()
oldnote.chop(1)
oldnote.append(QString(u'\u2029Q/'))
worktc.insertText(oldnote) # worktc now positioned after note
# use the note string to recognize the length of [Footnote key:sp
fnt_RE.setPattern(note_pat)
j = fnt_RE.indexIn(oldnote) # assert j==0
j = fnt_RE.cap(0).size() # size of the target portion
# set the work cursor to select just that, and replace it.
worktc.setPosition(note_start)
worktc.setPosition(note_start + j,QTextCursor.KeepAnchor)
worktc.insertText(note_qs)
if dbcount >= self.enoughForABar and 0 == (i & 7):
pqMsgs.rollBar(i)
# end of "for i in range(dbcount)"
# Clean up:
worktc.endEditBlock() # End the undo macro
self.docHasChanged() # tell the table the data has stabilized
        if dbcount >= self.enoughForABar :
pqMsgs.endBar() # clear the progress bar
if __name__ == "__main__":
def docEdit():
IMC.editCounter += 1
import sys
    from PyQt4.QtCore import (Qt,QFile,QIODevice,QTextStream,QSettings,QString,SIGNAL)
from PyQt4.QtGui import (QApplication,QPlainTextEdit,QFileDialog,QMainWindow)
import pqIMC
app = QApplication(sys.argv) # create an app
IMC = pqIMC.tricorder() # set up a fake IMC for unit test
IMC.fontFamily = QString("Courier")
import pqMsgs
pqMsgs.IMC = IMC
IMC.editWidget = QPlainTextEdit()
IMC.editWidget.setFont(pqMsgs.getMonoFont())
IMC.settings = QSettings()
IMC.editCounter = 0
widj = fnotePanel()
MW = QMainWindow()
MW.setCentralWidget(widj)
pqMsgs.makeBarIn(MW.statusBar())
MW.connect(IMC.editWidget, SIGNAL("textChanged()"), docEdit)
MW.show()
utqs = QString('''
This is text[A] with two anchors one at end of line.[2]
[Footnote A: footnote A which
extends onto
three lines]
[Footnote 2: footnote 2 which has[A] a nested note]
[Footnote A: nested ref in note 2]
This is another[DCCCCLXXXXVIII] reference.
This is another[q] reference.
[Footnote DCCCCLXXXXVIII: footnote DCCCCLXXXXVIII]
[Footnote q: footnote q]
/F
F/
A lame symbol[\u00a7] reference.
Ref to unmatched key[]
/F
this gets no notes
F/
[Footnot zz: orphan note]
[Footnote \u00a7: footnote symbol]
/F
F/
''')
IMC.editWidget.setPlainText(utqs)
IMC.mainWindow = MW
IMC.editWidget.show()
app.exec_()
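The renumbering comments above anticipate a ValueError from a too-big roman numeral in one of the key streams. The actual streamLambdas are defined elsewhere in this module; a minimal sketch of such an encoder, assuming the lowercase ivx key class and the conventional 1..3999 range, could look like:

```python
def to_roman(n):
    """Encode n as a lowercase roman numeral; raise ValueError when out of range."""
    if not 0 < n < 4000:
        raise ValueError("roman numerals support 1..3999")
    vals = [(1000, 'm'), (900, 'cm'), (500, 'd'), (400, 'cd'),
            (100, 'c'), (90, 'xc'), (50, 'l'), (40, 'xl'),
            (10, 'x'), (9, 'ix'), (5, 'v'), (4, 'iv'), (1, 'i')]
    out = []
    for value, digits in vals:
        # greedily subtract the largest remaining value
        while n >= value:
            out.append(digits)
            n -= value
    return ''.join(out)
```

A renumber stream would wrap an encoder like this in a lambda and catch the ValueError, restarting the stream the way doRenumber does above.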
|
A local move which is also known as an intrastate move takes place within a 100-mile radius and stays within the state of origin. If you’re only moving across town, then you probably need our local Waldwick movers. When moving locally the type of truck that the Waldwick movers use will usually be smaller and the service will usually be performed in one day unless you have a very big house.
A long distance move is any move within the United States that covers more than a 100-mile radius. This type of move is also known as an interstate move: relocating from state to state, with the move operating in two or more states, is considered long distance. If you’re moving across the country, you will need our Waldwick long distance movers.
Plan your next move with Waldwick Movers and we will guarantee you a smooth and easy transition. We will provide you with the best prices for the highest quality of moving and storage services.
Packing and moving a home can cause huge chaos in one's life. If managing all your packing needs adds extra stress, there's an easy solution to help take this task off your list of things to do. Waldwick moving companies offer excellent packing services, which can be supplied for virtually any type of move.
Waldwick moving companies that offer storage services provide individuals with either a private locked room or a space rented out on a monthly basis. This unit is a place where you can store your belongings while you are in transition, or when you are just looking for some extra space to keep your items out of the way. If you need to move out of one home before the next is ready, you may need this service, whereby your Waldwick movers will store your things in a systematically organized warehouse until the day you call and ask them to deliver to your new residence.
|
from __future__ import absolute_import, print_function
import logging
logger = logging.getLogger(__name__)
import os
import numpy as np
import re
import json
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore")
from tifffile import TiffFile, imsave, imread
#from spimagine.lib.tifffile import TiffFile, imsave, imread
from spimagine.lib.czifile import CziFile
def read3dTiff(fName):
return imread(fName)
def write3dTiff(data,fName):
imsave(fName,data)
# def getTiffSize(fName):
# from PIL import Image
# img = Image.open(fName, 'r')
# depth = 0
# while True:
# try:
# img.seek(depth)
# except Exception as e:
# break
# depth += 1
#
# return (depth,)+img.size[::-1]
def readCziFile(fName):
with CziFile(fName) as f:
return np.squeeze(f.asarray())
def parseIndexFile(fname):
"""
returns (t,z,y,z) dimensions of a spim stack
"""
try:
lines = open(fname).readlines()
except IOError:
print("could not open and read ",fname)
return None
items = lines[0].replace("\t",",").split(",")
try:
stackSize = [int(i) for i in items[-4:-1]] +[len(lines)]
except Exception as e:
print(e)
        print("couldn't parse ", fname)
return None
stackSize.reverse()
return stackSize
def parseMetaFile(fName):
"""
returns pixelSizes (dx,dy,dz)
"""
with open(fName) as f:
s = f.read()
try:
        z1 = float(re.findall("StartZ.*",s)[0].split("\t")[2])
        z2 = float(re.findall("StopZ.*",s)[0].split("\t")[2])
        zN = float(re.findall("NumberOfPlanes.*",s)[0].split("\t")[2])
return (.162,.162, (1.*z2-z1)/(zN-1.))
except Exception as e:
print(e)
        print("couldn't parse ", fName)
return (1.,1.,1.)
def parse_index_xwing(fname):
"""
returns (z,y,z) dimensions of a xwing stack
"""
try:
lines = open(fname).readlines()
except IOError:
print("could not open and read ",fname)
return None
items = lines[0].replace("\t",",").replace("\n","").split(",")
try:
stackSize = [int(i) for i in items[-3:]]
except Exception as e:
print(e)
        print("couldn't parse ", fname)
return None
stackSize.reverse()
return stackSize
def parse_meta_xwing(fName):
"""
returns pixelSizes (dx,dy,dz)
"""
with open(fName) as f:
try:
s = json.loads((f.readlines()[0]))
x = float(s["VoxelDimX"])
y = float(s["VoxelDimY"])
z = float(s["VoxelDimZ"])
return (x,y,z)
except Exception as e:
print(e)
            print("couldn't parse ", fName)
return (1.,1.,1.)
def fromSpimFolder(fName,dataFileName="data/data.bin",indexFileName="data/index.txt",pos=0,count=1):
stackSize = parseIndexFile(os.path.join(fName,indexFileName))
if stackSize:
# clamp to pos to stackSize
pos = min(pos,stackSize[0]-1)
pos = max(pos,0)
if count>0:
stackSize[0] = min(count,stackSize[0]-pos)
else:
stackSize[0] = max(0,stackSize[0]-pos)
with open(os.path.join(fName,dataFileName),"rb") as f:
f.seek(2*pos*np.prod(stackSize[1:]))
return np.fromfile(f,dtype="<u2",
count=np.prod(stackSize)).reshape(stackSize)
def test_tiff(fName):
    # Compare tifffile.imread with the read3dTiff wrapper, timing each,
    # and check that both readers return the same voxel data.
    from time import time
    ds = []
    for func in (read3dTiff, imread):
        t = time()
        ds.append(func(fName))
        print("%s\ntime: %.2f ms"%(func.__name__, 1000.*(time()-t)))
    assert np.allclose(*ds)
def createSpimFolder(fName,data = None,
stackSize= [10,10,32,32],
stackUnits = (.162,.162,.162)):
if not os.path.exists(fName):
os.makedirs(fName)
if not os.path.exists(os.path.join(fName,"data")):
os.makedirs(os.path.join(fName,"data"))
    if data is not None:
        stackSize = data.shape
        datafName = os.path.join(fName,"data/data.bin")
        with open(datafName,"wb") as f:
            data.astype(np.uint16).tofile(f)
Nt,Nz,Ny,Nx = stackSize
indexfName = os.path.join(fName,"data/index.txt")
with open(indexfName,"w") as f:
for i in range(Nt):
f.write("%i\t0.0000\t1,%i,%i,%i\t0\n"%(i,Nx,Ny,Nz))
metafName = os.path.join(fName,"metadata.txt")
with open(metafName,"w") as f:
f.write("timelapse.NumberOfPlanes\t=\t%i\t0\n"%Nz)
f.write("timelapse.StartZ\t=\t0\t0\n")
f.write("timelapse.StopZ\t=\t%.2f\t0\n"%(stackUnits[2]*(Nz-1.)))
def test_czi():
d = readCziFile("test_data/retina.czi")
return d
if __name__ == '__main__':
# test_tiff()
d = test_czi()
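parseMetaFile above derives the z pixel size from the StartZ/StopZ/NumberOfPlanes entries as (z2 - z1)/(zN - 1), i.e. the spacing between adjacent plane centers, not the extent divided by the plane count. A standalone sketch of that arithmetic (the function name is illustrative, not part of this module):

```python
def z_spacing(z1, z2, n_planes):
    """Spacing between adjacent planes when n_planes span [z1, z2]."""
    if n_planes < 2:
        raise ValueError("need at least two planes to define a spacing")
    return (z2 - z1) / (n_planes - 1.0)
```

With 6 planes spanning 0..10 the spacing is 10/5 = 2.0, matching the (zN - 1) denominator used in parseMetaFile.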
|
ACURO offer an extensive range of advanced, scientifically formulated treatment products manufactured to the highest international quality and environmental standards for guaranteed performance.
ACURO high performance chemical treatment products are used successfully around the world in many of the most demanding commercial, municipal and industrial process environments where they help to improve productivity, optimize performance and reduce equipment life-cycle costs.
ACURO's full range of scientifically formulated treatment products includes high performance water treatment chemicals for steam boilers, cooling water systems, cooling towers and closed circuits; wastewater and effluent treatment chemicals, high performance industrial chemicals, reverse osmosis membrane products, eco-friendly biological formulations, advanced polymers and much more. For further details simply select a product title.
Sulphamic acid has a very high shelf life.
Sulphamic acid does not require special storage or handling arrangements, hence no adulteration is possible.
Sulphamic acid has very high descaling effectiveness.
Complete cleaning can be achieved chemically with sulphamic acid, with no post-descaling manual cleaning required.
Sulphamic acid is a safe acid, packed in 50 kg HDPE bags, and presents no handling hazards.
No storage tanks or dosing system are required.
The solid can be charged directly to the system, eliminating the cost of a dosing system; it reduces the level of scaling solids and acts as an anti-scalant.
Accidental excess dosing does not affect the metal of the circulating system; instead it acts to remove deposited scale from the system.
It is recommended to dose it into the cooling water for descaling the condenser on a running plant.
Alum is essentially aluminium sulphate with the empirical formula Al2(SO4)3·18H2O. It is of two types: (i) Ferric and (ii) Non Ferric.
Non Ferric Alum is a purer form of aluminum sulphate. It is used in better grades of paper for loading and sizing purposes. Non Ferric Alum is manufactured from aluminium trihydrate whereas ferric alum uses bauxite as the raw material. The raw material is cooked with sulphuric acid under suitable conditions. The resultant product is cooled in suitable moulds to slabs. It is available both, in solid and in liquid forms.
Non ferric alum imparts better brightness to paper when used with rosin for sizing. It forms a complex resinate that spreads evenly over the surface of paper.
It finds other uses in pharmaceutical and mordanting industry. In water purification the highly positively charged aluminum precipitates negative suspensions as sludge.
It is the most versatile and easily used material for sizing paper with rosin.
Its highly charged polymeric aluminium chains adsorb and bridge colloidal particles in water with high efficiency, giving strong coagulation and bridging flocculation that effectively remove turbidity, colour, heavy metals and trace organic compounds from water.
1) Its purification performance is better than that of sulphate-based flocculating agents, and water purification cost is around 15-30% lower than with other flocculating agents.
2) Floc bodies form quickly and settle rapidly; the processing capacity is much greater than that of other flocculating agents.
3) No alkaline additives are required, and even in case of deliquescence the results remain unchanged.
4) Effective over a wide pH range, with strong adaptability and a wide range of uses.
5) Removes heavy metals and radioactive substances that lead to water pollution.
Sodium tripolyphosphate is used in meat processing, seafood, frozen shrimp, sausage and modified starch.
Sodium hexametaphosphate (SHMP) is a hexamer of composition (NaPO3)6. The sodium hexametaphosphate of commerce is typically a mixture of polymeric metaphosphates, of which the hexamer is one, and is usually the compound referred to by this name. It is more correctly termed sodium polymetaphosphate. It is prepared by melting monosodium orthophosphate, followed by rapid cooling. SHMP hydrolyzes in aqueous solution, particularly under acidic conditions, to sodium trimetaphosphate and sodium orthophosphate.
SHMP is used as a sequestrant and has applications within a wide variety of industries, including as a food additive.
Sodium carbonate is sometimes added to SHMP to raise the pH to 8.0-8.6, which produces a number of SHMP products used for water softening and detergents. It is also used as a dispersing agent to break down clay and other soil types. One of the lesser-known uses for sodium hexametaphosphate is as a deflocculant in the making of terra sigillata, a ceramic technique using a fine-particled slip.
Hydrated lime is a dry powder manufactured by treating quicklime with sufficient water to satisfy its chemical affinity for water, thereby converting the oxides to hydroxides. Hydrated lime is available only as a fine powder or a slurry. Normal grades of hydrated lime suitable for most chemical purposes will have 85 percent or more passing a 200-mesh sieve, while for special applications hydrated lime may be obtained as fine as 99.5 percent passing a 325-mesh sieve.
Hydrated Lime is used mainly in Sugar Refining, Paper processing, Leather Treatment, Flue Gas Treatment, Manufacturing of Di Calcium Phosphate, Paint Applications, Steel Ferro Alloys, Agricultural applications, soil Stabilization, Water Treatment, Construction, Pharmaceutical and Masonry applications etc.
Trisodium phosphate (TSP) is the inorganic compound with the formula Na3PO4. It is a white, granular or crystalline solid, highly soluble in water producing an alkaline solution. TSPs are used as cleaning agent, lubricant, food additive, stain remover and degreaser.
Alum is essentially aluminium sulphate with the empirical formula Al2(SO4)3·18H2O. It is of two types: (i) Ferric and (ii) Non Ferric.
Ferric alum is a less pure form of aluminum sulphate. It is used in lower grades of paper for loading and sizing purposes. Ferric alum uses bauxite as the raw material. The raw material is cooked with sulphuric acid under suitable conditions. The resultant product is sold in liquid form.
Ferric alum imparts better brightness to paper when used with rosin for sizing. It forms a complex resinate that spreads evenly over the surface of paper.
Ferric Chloride is also called Iron(III) Chloride. It is soluble in water and alcohol and is non-combustible. When dissolved in water, ferric chloride undergoes hydrolysis and gives off heat in an exothermic reaction. The resulting brown, acidic, and corrosive solution is used as a flocculant in sewage treatment and drinking water production, and as an etchant for copper-based metals in printed circuit boards.
Sanitizer of wood, stainless steel, concrete, tile, etc. surfaces.
Control microorganisms in sewage, wastewater and industrial process water systems.
Disinfection of floors, walls, and other surfaces of barns, pens, stalls, chutes, and other facilities occupied and/or traversed by animals or poultry.
Disinfection of troughs, rack and other feeding and watering appliances, including feed racks, mangers, troughs, automatic feeders, fountains, and waterers.
Disinfection of halters, ropes, and other types of equipment used in handling and/or restraining animals and/or poultry.
Disinfection of forks, shovels, and scrapers used for removing litter and manure.
Sanitizer of surfaces: wooden butcher blocks, stainless steel tops, concrete floors, tile walls, etc.
Disinfection of non-porous hard surfaces: tile, glass, stainless steel, fiberglass, etc.
Sanitation and disinfection of utensils, and equipment used in preparation, manufacture and filling.
Sanitation and disinfection of bottles, etc.
Sanitation and disinfection of utensils, and equipment used in preparation, and manufacture.
Sanitation and disinfection of food containers and food contact surfaces.
Sanitation and disinfection of work areas.
Sanitation and disinfection of wastewater.
Citric acid is a weak organic acid found in citrus fruits.
It is a natural preservative and is also used to add an acidic (sour) taste to foods and soft drinks.
Citric acid exists in a variety of fruits and vegetables, but it is most concentrated in lemons and limes, where it can comprise as much as 8 percent of the dry weight of the fruit.
Cyclohexylamine is an organic compound, belonging to the aliphatic amine class. It is a colorless liquid, although like many amines, samples are often colored due to contaminants. It has a fishy odor and is miscible with water. Like other amines, it is a weak base, compared to strong bases such as NaOH, but it is a stronger base than its aromatic analog, aniline.
It is a useful intermediate in the production of many other organic compounds. It is a metabolite of cyclamate.
EDTA (ethylenediaminetetraacetic acid) is an amino acid compound, a powerful chelating agent - meaning it attaches to plaque build up and heavy metals and removes them naturally from the body. EDTA is recognized by the body and easily assimilated.
EDTA is one of the most powerful metal chelators known. However, EDTA has become a commonly known name. There are actually many forms and chemical formulas for the same basic product called EDTA. All are formulated to remove metals, but for different purposes. Industrial Grade EDTA is used in batteries and for other practical purposes. Food Grade EDTA is used to protect us to some degree from harmful metals that find their way into the foods we eat. The sodium and calcium salts of EDTA (ethylenediaminetetraacetic acid) are common sequestrants in many kinds of foods and beverages.
And Pharmaceutical Grade EDTA is used in the best chelation products for its primary function, that of removing unwanted metals (in particular Calcium, Mercury, Lead, Cadmium & Arsenic) from the body's organs and cardiovascular system.
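For intuition, chelation is (ideally) 1:1 on a molar basis, so the mass of EDTA needed to bind a given mass of a metal follows directly from the molar masses. The sketch below is illustrative only — the molar mass used for anhydrous tetrasodium EDTA and the assumption of complete 1:1 binding are simplifications, not a dosing guideline:

```python
# 1:1 molar chelation sketch: mass of tetrasodium EDTA needed per mass of Ca2+.
M_NA4EDTA = 380.2  # g/mol, anhydrous tetrasodium EDTA (assumed value)
M_CA = 40.08       # g/mol, calcium

def edta_mass_for_calcium(ca_mass_mg):
    """Mass of Na4-EDTA (mg) to chelate ca_mass_mg of calcium at a 1:1 molar ratio."""
    return ca_mass_mg * M_NA4EDTA / M_CA

print(round(edta_mass_for_calcium(100.0), 1))  # mg of EDTA per 100 mg of calcium
```

In practice an excess is used, since competing ions and pH affect how much of the EDTA is actually available for the target metal.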
Solubility: Soluble in water (100 g/l) at 20 °C, and 3 M NaOH (100 mg/ml).
Ammoniated EDTA is an aqueous solution of the ammonium salt of ethylenediaminetetraacetate (EDTA). It is used successfully to remove calcium and other types of scale from boilers, evaporators, heat exchangers, filter cloths and glass-lined kettles, and also to prevent scale formation.
EDTA Tetrasodium Powder is used in Agriculture, Building & Construction, Industrial & Household cleaning detergents, Feed Additives, Food Fortification, Food Preservation, Gas Sweetening, Metal Plating and Electronics, Oil Industry, Personal Care, Pharma, Photography, Polymer Production, Printing Ink, Pulp and Paper, Textiles.
Sodium sulfite (sodium sulphite) is a soluble sodium salt of sulfurous acid (sulfite) with the chemical formula Na2SO3. It is a product of sulfur dioxide scrubbing, a part of the flue-gas desulfurization process. It is also used as a preservative to prevent dried fruit from discoloring, and for preserving meats, and is used in the same way as sodium thiosulfate to convert elemental halogens to their respective hydrohalic acids, in photography and for reducing chlorine levels in pools.
Sodium sulfite is primarily used in the pulp and paper industry. It is used in water treatment as an oxygen scavenger agent, in the photographic industry to protect developer solutions from oxidation and (as hypo clear solution) to wash fixer (sodium thiosulfate) from film and photo-paper emulsions, in the textile industry as a bleaching, desulfurizing and dechlorinating agent and in the leather trade for the sulfitization of tanning extracts. It is used in the purification of TNT for military use. It is used in chemical manufacturing as a sulfonation and sulfomethylation agent. It is used in the production of sodium thiosulfate. It is used in other applications, including froth flotation of ores, oil recovery, food preservatives, and making dyes.
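The oxygen-scavenger use above follows simple stoichiometry: two moles of sulfite consume one mole of dissolved oxygen (2 Na2SO3 + O2 → 2 Na2SO4). A back-of-the-envelope sketch — real boiler-water programs dose an excess and account for temperature and catalysts, so treat this as an illustration rather than a treatment recipe:

```python
# Oxygen-scavenger dosing sketch: 2 Na2SO3 + O2 -> 2 Na2SO4,
# i.e. two moles of sulfite consume one mole of dissolved oxygen.
M_NA2SO3 = 126.04  # g/mol, sodium sulfite
M_O2 = 32.00       # g/mol, molecular oxygen

def sulfite_dose_mg_per_l(dissolved_o2_mg_per_l):
    """Stoichiometric Na2SO3 dose (mg/L) for a given dissolved-oxygen level."""
    return dissolved_o2_mg_per_l * (2 * M_NA2SO3) / M_O2

print(round(sulfite_dose_mg_per_l(1.0), 2))  # ~7.88 mg sulfite per mg O2
```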
Sodium metabisulfite or sodium pyrosulfite is an inorganic compound of chemical formula Na2S2O5. The substance is sometimes referred to as disodium metabisulfite. It is used as a disinfectant, antioxidant and preservative agent.
It is commonly used in homebrewing and winemaking to sanitize equipment. It is used as a cleaning agent for potable water reverse osmosis membranes in desalination systems. It is also used to remove chloramine from drinking water after treatment.
Sodium Bisulfite Solution 25-35% is a clear, yellow liquid with a sulfur dioxide odor.
Sodium Bisulphite is widely used in water treatment as a dechlorination agent.
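The dechlorination use comes down to stoichiometry as well: roughly one mole of bisulfite neutralizes one mole of free chlorine. The following sketch assumes the reaction NaHSO3 + Cl2 + H2O → NaHSO4 + 2 HCl and a hypothetical 10% safety excess — illustrative numbers, not a supplier dosing recommendation:

```python
# Stoichiometric dechlorination with sodium bisulfite (illustrative sketch).
# NaHSO3 + Cl2 + H2O -> NaHSO4 + 2 HCl: one mole of bisulfite per mole of Cl2.
M_NAHSO3 = 104.06  # g/mol, sodium bisulfite
M_CL2 = 70.91      # g/mol, chlorine

def bisulfite_dose_mg_per_l(free_chlorine_mg_per_l, safety_factor=1.1):
    """NaHSO3 dose (mg/L) to neutralize the given free-chlorine residual,
    with a small excess applied via safety_factor."""
    stoich_ratio = M_NAHSO3 / M_CL2  # ~1.47 mg NaHSO3 per mg Cl2
    return free_chlorine_mg_per_l * stoich_ratio * safety_factor

print(round(bisulfite_dose_mg_per_l(2.0), 2))  # dose for a 2 mg/L residual
```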
Looking for Water Treatment Chemicals?
|
#!/usr/bin/env python2.3
"""Report on the number of currently waiting clients in the ZEO queue.
Usage: %(PROGRAM)s [options] logfile
Options:
-h / --help
Print this help text and exit.
-v / --verbose
Verbose output
-f file
--file file
Use the specified file to store the incremental state as a pickle. If
not given, %(STATEFILE)s is used.
-r / --reset
Reset the state of the tool. This blows away any existing state
pickle file and then exits -- it does not parse the file. Use this
when you rotate log files so that the next run will parse from the
beginning of the file.
"""
from __future__ import print_function
import os
import re
import sys
import time
import errno
import getopt
from ZEO._compat import load, dump
COMMASPACE = ', '
STATEFILE = 'zeoqueue.pck'
PROGRAM = sys.argv[0]
tcre = re.compile(r"""
(?P<ymd>
\d{4}- # year
\d{2}- # month
\d{2}) # day
T # separator
(?P<hms>
\d{2}: # hour
\d{2}: # minute
\d{2}) # second
""", re.VERBOSE)
ccre = re.compile(r"""
zrpc-conn:(?P<addr>\d+.\d+.\d+.\d+:\d+)\s+
calling\s+
(?P<method>
\w+) # the method
\( # args open paren
\' # string quote start
(?P<tid>
\S+) # first argument -- usually the tid
\' # end of string
(?P<rest>
.*) # rest of line
""", re.VERBOSE)
wcre = re.compile(r'Clients waiting: (?P<num>\d+)')
def parse_time(line):
"""Return the time portion of a zLOG line in seconds or None."""
mo = tcre.match(line)
if mo is None:
return None
date, time_ = mo.group('ymd', 'hms')
date_l = [int(elt) for elt in date.split('-')]
time_l = [int(elt) for elt in time_.split(':')]
return int(time.mktime(date_l + time_l + [0, 0, 0]))
class Txn:
"""Track status of single transaction."""
def __init__(self, tid):
self.tid = tid
self.hint = None
self.begin = None
self.vote = None
self.abort = None
self.finish = None
self.voters = []
def isactive(self):
if self.begin and not (self.abort or self.finish):
return True
else:
return False
class Status:
"""Track status of ZEO server by replaying log records.
We want to keep track of several events:
- The last committed transaction.
- The last committed or aborted transaction.
- The last transaction that got the lock but didn't finish.
- The client address doing the first vote of a transaction.
- The number of currently active transactions.
- The number of reported queued transactions.
- Client restarts.
- Number of current connections (but this might not be useful).
We can observe these events by reading the following sorts of log
entries:
2002-12-16T06:16:05 BLATHER(-100) zrpc:12649 calling
tpc_begin('\x03I\x90((\xdbp\xd5', '', 'QueueCatal...
2002-12-16T06:16:06 BLATHER(-100) zrpc:12649 calling
vote('\x03I\x90((\xdbp\xd5')
2002-12-16T06:16:06 BLATHER(-100) zrpc:12649 calling
tpc_finish('\x03I\x90((\xdbp\xd5')
2002-12-16T10:46:10 INFO(0) ZSS:12649:1 Transaction blocked waiting
for storage. Clients waiting: 1.
2002-12-16T06:15:57 BLATHER(-100) zrpc:12649 connect from
('10.0.26.54', 48983): <ManagedServerConnection ('10.0.26.54', 48983)>
2002-12-16T10:30:09 INFO(0) ZSS:12649:1 disconnected
"""
def __init__(self):
self.lineno = 0
self.pos = 0
self.reset()
def reset(self):
self.commit = None
self.commit_or_abort = None
self.last_unfinished = None
self.n_active = 0
self.n_blocked = 0
self.n_conns = 0
self.t_restart = None
self.txns = {}
def iscomplete(self):
# The status report will always be complete if we encounter an
# explicit restart.
if self.t_restart is not None:
return True
# If we haven't seen a restart, assume that seeing a finished
# transaction is good enough.
return self.commit is not None
def process_file(self, fp):
if self.pos:
if VERBOSE:
print('seeking to file position', self.pos)
fp.seek(self.pos)
while True:
line = fp.readline()
if not line:
break
self.lineno += 1
self.process(line)
self.pos = fp.tell()
def process(self, line):
if line.find("calling") != -1:
self.process_call(line)
elif line.find("connect") != -1:
self.process_connect(line)
# test for "locked" because word may start with "B" or "b"
elif line.find("locked") != -1:
self.process_block(line)
elif line.find("Starting") != -1:
self.process_start(line)
def process_call(self, line):
mo = ccre.search(line)
if mo is None:
return
called_method = mo.group('method')
# Exit early if we've got zeoLoad, because it's the most
# frequently called method and we don't use it.
if called_method == "zeoLoad":
return
t = parse_time(line)
meth = getattr(self, "call_%s" % called_method, None)
if meth is None:
return
client = mo.group('addr')
tid = mo.group('tid')
rest = mo.group('rest')
meth(t, client, tid, rest)
def process_connect(self, line):
pass
def process_block(self, line):
mo = wcre.search(line)
if mo is None:
# assume that this was a restart message for the last blocked
# transaction.
self.n_blocked = 0
else:
self.n_blocked = int(mo.group('num'))
def process_start(self, line):
if line.find("Starting ZEO server") != -1:
self.reset()
self.t_restart = parse_time(line)
def call_tpc_begin(self, t, client, tid, rest):
txn = Txn(tid)
txn.begin = t
if rest[0] == ',':
i = 1
while rest[i].isspace():
i += 1
rest = rest[i:]
txn.hint = rest
self.txns[tid] = txn
self.n_active += 1
self.last_unfinished = txn
def call_vote(self, t, client, tid, rest):
txn = self.txns.get(tid)
if txn is None:
print("Oops!")
txn = self.txns[tid] = Txn(tid)
txn.vote = t
txn.voters.append(client)
def call_tpc_abort(self, t, client, tid, rest):
txn = self.txns.get(tid)
if txn is None:
print("Oops!")
txn = self.txns[tid] = Txn(tid)
txn.abort = t
txn.voters = []
self.n_active -= 1
if self.commit_or_abort:
# delete the old transaction
try:
del self.txns[self.commit_or_abort.tid]
except KeyError:
pass
self.commit_or_abort = txn
def call_tpc_finish(self, t, client, tid, rest):
txn = self.txns.get(tid)
if txn is None:
print("Oops!")
txn = self.txns[tid] = Txn(tid)
txn.finish = t
txn.voters = []
self.n_active -= 1
if self.commit:
# delete the old transaction
try:
del self.txns[self.commit.tid]
except KeyError:
pass
if self.commit_or_abort:
# delete the old transaction
try:
del self.txns[self.commit_or_abort.tid]
except KeyError:
pass
self.commit = self.commit_or_abort = txn
def report(self):
print("Blocked transactions:", self.n_blocked)
if not VERBOSE:
return
if self.t_restart:
print("Server started:", time.ctime(self.t_restart))
if self.commit is not None:
t = self.commit_or_abort.finish
if t is None:
t = self.commit_or_abort.abort
print("Last finished transaction:", time.ctime(t))
# the blocked transaction should be the first one that calls vote
L = [(txn.begin, txn) for txn in self.txns.values()]
L.sort()
for x, txn in L:
if txn.isactive():
began = txn.begin
if txn.voters:
print("Blocked client (first vote):", txn.voters[0])
print("Blocked transaction began at:", time.ctime(began))
print("Hint:", txn.hint)
print("Idle time: %d sec" % int(time.time() - began))
break
def usage(code, msg=''):
print(__doc__ % globals(), file=sys.stderr)
if msg:
print(msg, file=sys.stderr)
sys.exit(code)
def main():
global VERBOSE
VERBOSE = 0
file = STATEFILE
reset = False
# -0 is a secret option used for testing purposes only
seek = True
try:
opts, args = getopt.getopt(sys.argv[1:], 'vhf:r0',
['help', 'verbose', 'file=', 'reset'])
except getopt.error as msg:
usage(1, msg)
for opt, arg in opts:
if opt in ('-h', '--help'):
usage(0)
elif opt in ('-v', '--verbose'):
VERBOSE += 1
elif opt in ('-f', '--file'):
file = arg
elif opt in ('-r', '--reset'):
reset = True
elif opt == '-0':
seek = False
if reset:
# Blow away the existing state file and exit
try:
os.unlink(file)
if VERBOSE:
print('removing pickle state file', file)
except OSError as e:
if e.errno != errno.ENOENT:
raise
return
if not args:
usage(1, 'logfile is required')
if len(args) > 1:
usage(1, 'too many arguments: %s' % COMMASPACE.join(args))
path = args[0]
# Get the previous status object from the pickle file, if it is available
# and if the --reset flag wasn't given.
status = None
try:
statefp = open(file, 'rb')
try:
status = load(statefp)
if VERBOSE:
print('reading status from file', file)
finally:
statefp.close()
except IOError as e:
if e.errno != errno.ENOENT:
raise
if status is None:
status = Status()
if VERBOSE:
print('using new status')
if not seek:
status.pos = 0
fp = open(path, 'rb')
try:
status.process_file(fp)
finally:
fp.close()
# Save state
statefp = open(file, 'wb')
dump(status, statefp, 1)
statefp.close()
# Print the report and return the number of blocked clients in the exit
# status code.
status.report()
sys.exit(status.n_blocked)
if __name__ == "__main__":
main()
|
The internet is broken, but this website isn’t, nor is our will to answer questions about potential intrastaff desert-island cannibalism. Ask us stuff below.
Update, 5:04 p.m.: We’re done. Thanks.
|
# Copyright 2018 Google Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" This template creates a network, optionally with subnetworks. """
def append_optional_property(res, properties, prop_name):
""" If the property is set, it is added to the resource. """
val = properties.get(prop_name)
if val:
res['properties'][prop_name] = val
return
def generate_config(context):
""" Entry point for the deployment resources. """
properties = context.properties
name = properties.get('name', context.env['name'])
network_self_link = '$(ref.{}.selfLink)'.format(context.env['name'])
network_resource = {
# https://cloud.google.com/compute/docs/reference/rest/v1/networks/insert
'type': 'gcp-types/compute-v1:networks',
'name': context.env['name'],
'properties':
{
'name': name,
'autoCreateSubnetworks': properties.get('autoCreateSubnetworks', False)
}
}
optional_properties = [
'description',
'routingConfig',
'project',
]
for prop in optional_properties:
append_optional_property(network_resource, properties, prop)
resources = [network_resource]
# Subnetworks:
out = {}
for i, subnetwork in enumerate(
properties.get('subnetworks', []), 1
):
subnetwork['network'] = network_self_link
if properties.get('project'):
subnetwork['project'] = properties.get('project')
subnetwork_name = 'subnetwork-{}'.format(i)
resources.append(
{
'name': subnetwork_name,
'type': 'subnetwork.py',
'properties': subnetwork
}
)
out[subnetwork_name] = {
'selfLink': '$(ref.{}.selfLink)'.format(subnetwork_name),
'ipCidrRange': '$(ref.{}.ipCidrRange)'.format(subnetwork_name),
'region': '$(ref.{}.region)'.format(subnetwork_name),
'network': '$(ref.{}.network)'.format(subnetwork_name),
'gatewayAddress': '$(ref.{}.gatewayAddress)'.format(subnetwork_name)
}
return {
'resources':
resources,
'outputs':
[
{
'name': 'name',
'value': name
},
{
'name': 'selfLink',
'value': network_self_link
},
{
'name': 'subnetworks',
'value': out
}
]
}
|
On Sunday 7th May, parents-to-be are invited to the first pregnancy open day of the year at The Portland Hospital, to tour its world-class maternity facilities and meet consultants, midwives and breastfeeding specialists.
This free event gives expectant mothers and their partners the opportunity to tour the hospital’s extensive maternity facilities, which include five private delivery rooms, four operating theatres, 18 private bedrooms with en-suites, as well as The Portland Hospital’s on-site nursery.
Visitors will also be able to take a look at examples of babies moving inside the womb with state-of-the-art 3D/4D ultrasound technology, and will have the opportunity to have a 15 minute consultation with a midwife or consultant who will be available to discuss different pregnancy and birthing options, answer questions and provide one-to-one advice.
“Our pregnancy open days are a great way for pregnant women and their partners to meet the multidisciplinary teams that provide care and see the fantastic facilities that The Portland Hospital has to deliver genuinely personalised, safe and world class antenatal, delivery and postnatal care.”
Those attending will also learn about the dedicated antenatal and postnatal classes available to parents-to-be, and mums-to-be will have the opportunity to have a ‘Bump Photo’ taken courtesy of Imagethirst photography, and will leave with a complimentary goodie bag. Refreshments are also available throughout the day.
The pregnancy open day event will take place on Sunday 7th May 2017 from 10.30am-4pm, at The Portland Hospital, 205-209 Great Portland Street, London W1W 5AH. Visitors can register for free for this event here. The Portland Hospital is also running a further two open days this year which you can also pre-register for here. In addition to the open day, you can register for the Pregnancy Meet and Greet service, which is a free 30 minute appointment with a midwife to help you decide which route you want to take in your pregnancy, you can register here.
For further information please call the Maternity Enquiry Line on 020 7390 6068.
Please note there are no costs incurred to attend this event, and all parents-to-be are invited to attend. The open day will provide a brilliant opportunity for visitors to find out important information about pregnancy and birthing options, meet the staff, midwives and consultants, and tour the hospital’s facilities. Visitors need to register in advance for the event.
|
#!/usr/bin/env python
##################################################
# Gnuradio Python Flow Graph
# Title: TPC Encoder Test
# Generated:
##################################################
from gnuradio import gr, gr_unittest
from gnuradio import blocks
from fec.extended_decoder_interface import extended_decoder_interface
import fec
import struct
import os
class qa_tpc_decoder_withnoise(gr_unittest.TestCase):
def setUp(self):
self.tb = gr.top_block()
self.testFilesDir = os.path.join(os.environ['srcdir'], '..', 'testdata')
def tearDown(self):
self.tb = None
def readBinaryFile(self, filename):
fileData = ()
f = open(filename, 'rb')
try:
# read the file, this function is expecting a binary file and will
# read the file as unsigned chars
byte = f.read(1)
while byte != "":
# put the byte into the return vector
fileData = fileData + (byte,)
byte = f.read(1)
finally:
f.close()
return map(ord, fileData)
def readFloatFile(self, filename):
f = open(filename, 'rb')
n = 288
fileData = struct.unpack('f'*n, f.read(4*n))
f.close()
return fileData
def runIt(self, decoderInputFilename, decoderOutputFilename, rowPoly, colPoly, kRow, kCol, B, Q,
numIter, decType):
decoderInputLLR = self.readFloatFile(os.path.join(self.testFilesDir, decoderInputFilename))
decoderExpectedOutput = self.readBinaryFile(os.path.join(self.testFilesDir, decoderOutputFilename))
# define the required components
self.variable_cc_def_fecapi_tpc_decoder_def_0 = \
variable_cc_def_fecapi_tpc_decoder_def_0 = \
map( (lambda a: fec.tpc_make_decoder((rowPoly), (colPoly), kRow, kCol, B, Q, numIter, decType)), range(0,1) );
self.variable_decoder_interface_0 = \
variable_decoder_interface_0 = \
extended_decoder_interface(decoder_obj_list=variable_cc_def_fecapi_tpc_decoder_def_0,
threading='capillary',
ann=None,
puncpat='11',
integration_period=10000)
# setup connections of flowgraph
self.blocks_vector_source_x_0 = blocks.vector_source_f(decoderInputLLR, False, 1, [])
self.blocks_vector_sink_x_0 = blocks.vector_sink_b(1)
self.tb.connect((self.blocks_vector_source_x_0, 0), (self.variable_decoder_interface_0, 0))
self.tb.connect((self.variable_decoder_interface_0, 0), (self.blocks_vector_sink_x_0, 0))
# run the block
self.tb.run()
# check output versus expectedOutputData
actualOutputData = self.blocks_vector_sink_x_0.data()
actualOutputDataList = list(actualOutputData)
actualOutputDataList_int = map(int, actualOutputDataList)
print type(decoderExpectedOutput)
print type(actualOutputDataList_int)
print '******** DECODER EXPECTED OUTPUT *********'
print decoderExpectedOutput
print '******** DECODER ACTUAL OUTPUT ***********'
print actualOutputDataList_int
outputLen = len(decoderExpectedOutput)
self.assertFloatTuplesAlmostEqual(decoderExpectedOutput, actualOutputDataList_int, outputLen)
def test_004_tpc_decoder(self):
print 'RUNNING NOISE TEST 4'
inputFilename = 'snrtest_4_input.bin'
outputFilename = 'snrtest_4_output.bin'
# the definitions below MUST match the octave test script
rowPoly = [3]
colPoly = [43]
kRow = 26
kCol = 6
B = 9
Q = 3
numIters = 6
decoderType = 2
self.runIt(inputFilename, outputFilename, rowPoly, colPoly, kRow, kCol, B, Q, numIters, decoderType)
def test_005_tpc_decoder(self):
print 'RUNNING NOISE TEST 5'
inputFilename = 'snrtest_5_input.bin'
outputFilename = 'snrtest_5_output.bin'
# the definitions below MUST match the octave test script
rowPoly = [3]
colPoly = [43]
kRow = 26
kCol = 6
B = 9
Q = 3
numIters = 6
decoderType = 2
self.runIt(inputFilename, outputFilename, rowPoly, colPoly, kRow, kCol, B, Q, numIters, decoderType)
def test_006_tpc_decoder(self):
print 'RUNNING NOISE TEST 6'
inputFilename = 'snrtest_6_input.bin'
outputFilename = 'snrtest_6_output.bin'
# the definitions below MUST match the octave test script
rowPoly = [3]
colPoly = [43]
kRow = 26
kCol = 6
B = 9
Q = 3
numIters = 6
decoderType = 2
self.runIt(inputFilename, outputFilename, rowPoly, colPoly, kRow, kCol, B, Q, numIters, decoderType)
def test_007_tpc_decoder(self):
print 'RUNNING NOISE TEST 7'
inputFilename = 'snrtest_7_input.bin'
outputFilename = 'snrtest_7_output.bin'
# the definitions below MUST match the octave test script
rowPoly = [3]
colPoly = [43]
kRow = 26
kCol = 6
B = 9
Q = 3
numIters = 6
decoderType = 2
self.runIt(inputFilename, outputFilename, rowPoly, colPoly, kRow, kCol, B, Q, numIters, decoderType)
def test_008_tpc_decoder(self):
print 'RUNNING NOISE TEST 8'
inputFilename = 'snrtest_8_input.bin'
outputFilename = 'snrtest_8_output.bin'
# the definitions below MUST match the octave test script
rowPoly = [3]
colPoly = [43]
kRow = 26
kCol = 6
B = 9
Q = 3
numIters = 6
decoderType = 2
self.runIt(inputFilename, outputFilename, rowPoly, colPoly, kRow, kCol, B, Q, numIters, decoderType)
def test_009_tpc_decoder(self):
print 'RUNNING NOISE TEST 9'
inputFilename = 'snrtest_9_input.bin'
outputFilename = 'snrtest_9_output.bin'
# the definitions below MUST match the octave test script
rowPoly = [3]
colPoly = [43]
kRow = 26
kCol = 6
B = 9
Q = 3
numIters = 6
decoderType = 2
self.runIt(inputFilename, outputFilename, rowPoly, colPoly, kRow, kCol, B, Q, numIters, decoderType)
def test_010_tpc_decoder(self):
print 'RUNNING NOISE TEST 10'
inputFilename = 'snrtest_10_input.bin'
outputFilename = 'snrtest_10_output.bin'
# the definitions below MUST match the octave test script
rowPoly = [3]
colPoly = [43]
kRow = 26
kCol = 6
B = 9
Q = 3
numIters = 6
decoderType = 2
self.runIt(inputFilename, outputFilename, rowPoly, colPoly, kRow, kCol, B, Q, numIters, decoderType)
if __name__=='__main__':
gr_unittest.run(qa_tpc_decoder_withnoise)
|
Ahead of Rangers’ Scottish Cup fixture against Cowdenbeath, there was great interest in the team selection.
Against a lower league opposition, would Gerrard be ruthless or give game time to fringe players?
When the starting XI was released an hour before kick-off, it’s fair to say not everybody was happy.
There were eight changes from the side that defeated Livingston on Sunday.
The formation was also altered, with Gerrard opting for two strikers.
Amongst those given opportunities were Wes Foderingham, Jon Flanagan and Lassana Coulibaly, none of whom had featured since before the winter break.
The only three players who retained their places were Nikola Katic, Ryan Jack and Daniel Candeias.
Reading into those selections, Gerrard may have shown the fans who his main man is for 2019.
The retentions of Candeias and Katic looked to be simply a reward for their weekend performances, after Gerrard had left them out against Killie.
However, the selection of the ex-Aberdeen skipper was different.
He became the only player to start every match since the winter break.
Even the captain, James Tavernier, got the night off.
Yet, Gerrard clearly felt he couldn’t go into the tie without the safety blanket of Jack in front of the back four.
That’s an indication of just how important the 26-year-old is for the run in.
|
"""
mfbas module. Contains the ModflowBas class. Note that the user can access
the ModflowBas class as `flopy.modflow.ModflowBas`.
Additional information for this MODFLOW package can be found at the `Online
MODFLOW Guide
<http://water.usgs.gov/ogw/modflow/MODFLOW-2005-Guide/index.html?bas6.htm>`_.
"""
import sys
import numpy as np
from ..pakbase import Package
from ..utils import Util3d, check, get_neighbors
class ModflowBas(Package):
"""
MODFLOW Basic Package Class.
Parameters
----------
model : model object
The model object (of type :class:`flopy.modflow.mf.Modflow`) to which
this package will be added.
ibound : array of ints, optional
The ibound array (the default is 1).
strt : array of floats, optional
An array of starting heads (the default is 1.0).
ifrefm : bool, optional
Indication if data should be read using free format (the default is
True).
ixsec : bool, optional
Indication of whether model is cross sectional or not (the default is
False).
ichflg : bool, optional
Flag indicating that flows between constant head cells should be
calculated (the default is False).
stoper : float
percent discrepancy that is compared to the budget percent discrepancy
when the solver convergence criteria are not met. Execution will
continue unless the budget percent discrepancy is greater than stoper
(default is None). MODFLOW-2005 only
hnoflo : float
Head value assigned to inactive cells (default is -999.99).
extension : str, optional
File extension (default is 'bas').
unitnumber : int, optional
FORTRAN unit number for this package (default is None).
filenames : str or list of str
Filenames to use for the package. If filenames=None the package name
will be created using the model name and package extension. If a single
string is passed the package name will be set to the string.
Default is None.
Attributes
----------
heading : str
Text string written to top of package input file.
options : list of str
Can be either or a combination of XSECTION, CHTOCH or FREE.
ifrefm : bool
Indicates whether or not packages will be written as free format.
Methods
-------
See Also
--------
Notes
-----
Examples
--------
>>> import flopy
>>> m = flopy.modflow.Modflow()
>>> bas = flopy.modflow.ModflowBas(m)
"""
@staticmethod
def ftype():
return 'BAS6'
@staticmethod
def defaultunit():
return 13
def __init__(self, model, ibound=1, strt=1.0, ifrefm=True, ixsec=False,
ichflg=False, stoper=None, hnoflo=-999.99, extension='bas',
unitnumber=None, filenames=None):
"""
Package constructor.
"""
if unitnumber is None:
unitnumber = ModflowBas.defaultunit()
# set filenames
if filenames is None:
filenames = [None]
elif isinstance(filenames, str):
filenames = [filenames]
# Fill namefile items
name = [ModflowBas.ftype()]
units = [unitnumber]
extra = ['']
# set package name
fname = [filenames[0]]
# Call ancestor's init to set self.parent, extension, name and unit number
Package.__init__(self, model, extension=extension, name=name,
unit_number=units, extra=extra, filenames=fname)
self.url = 'bas6.htm'
nrow, ncol, nlay, nper = self.parent.nrow_ncol_nlay_nper
self.ibound = Util3d(model, (nlay, nrow, ncol), np.int, ibound,
name='ibound', locat=self.unit_number[0])
self.strt = Util3d(model, (nlay, nrow, ncol), np.float32, strt,
name='strt', locat=self.unit_number[0])
self.heading = '# {} package for '.format(self.name[0]) + \
' {}, '.format(model.version_types[model.version]) + \
'generated by Flopy.'
self.options = ''
self.ixsec = ixsec
self.ichflg = ichflg
self.stoper = stoper
#self.ifrefm = ifrefm
#model.array_free_format = ifrefm
model.free_format_input = ifrefm
self.hnoflo = hnoflo
self.parent.add_package(self)
return
@property
def ifrefm(self):
return self.parent.free_format_input
def __setattr__(self, key, value):
if key == "ifrefm":
self.parent.free_format_input = value
else:
super(ModflowBas,self).__setattr__(key,value)
def check(self, f=None, verbose=True, level=1):
"""
Check package data for common errors.
Parameters
----------
f : str or file handle
String defining file name or file handle for summary file
of check method output. If a string is passed a file handle
is created. If f is None, check method does not write
results to a summary file. (default is None)
verbose : bool
Boolean flag used to determine if check method results are
written to the screen
level : int
Check method analysis level. If level=0, summary checks are
performed. If level=1, full checks are performed.
Returns
-------
None
Examples
--------
>>> import flopy
>>> m = flopy.modflow.Modflow.load('model.nam')
>>> m.bas6.check()
"""
chk = check(self, f=f, verbose=verbose, level=level)
neighbors = get_neighbors(self.ibound.array)
neighbors[np.isnan(neighbors)] = 0 # set neighbors at edges to 0 (inactive)
chk.values(self.ibound.array,
(self.ibound.array > 0) & np.all(neighbors < 1, axis=0),
'isolated cells in ibound array', 'Warning')
chk.values(self.ibound.array, np.isnan(self.ibound.array),
error_name='Not a number', error_type='Error')
chk.summarize()
return chk
def write_file(self, check=True):
"""
Write the package file.
Parameters
----------
check : boolean
Check package data for common errors. (default True)
Returns
-------
None
"""
if check: # allows turning off package checks when writing files at model level
self.check(f='{}.chk'.format(self.name[0]), verbose=self.parent.verbose, level=1)
# Open file for writing
f_bas = open(self.fn_path, 'w')
# First line: heading
#f_bas.write('%s\n' % self.heading)
f_bas.write('{0:s}\n'.format(self.heading))
# Second line: format specifier
self.options = ''
if self.ixsec:
self.options += 'XSECTION'
if self.ichflg:
self.options += ' CHTOCH'
if self.ifrefm:
self.options += ' FREE'
if self.stoper is not None:
self.options += ' STOPERROR {0}'.format(self.stoper)
f_bas.write('{0:s}\n'.format(self.options))
# IBOUND array
f_bas.write(self.ibound.get_file_entry())
# Head in inactive cells
f_bas.write('{0:15.6G}\n'.format(self.hnoflo))
# Starting heads array
f_bas.write(self.strt.get_file_entry())
# Close file
f_bas.close()
@staticmethod
def load(f, model, nlay=None, nrow=None, ncol=None, ext_unit_dict=None, check=True):
"""
Load an existing package.
Parameters
----------
f : filename or file handle
File to load.
model : model object
The model object (of type :class:`flopy.modflow.mf.Modflow`) to
which this package will be added.
nlay, nrow, ncol : int, optional
If not provided, then the model must contain a discretization
package with correct values for these parameters.
ext_unit_dict : dictionary, optional
If the arrays in the file are specified using EXTERNAL,
or older style array control records, then `f` should be a file
handle. In this case ext_unit_dict is required, which can be
constructed using the function
:class:`flopy.utils.mfreadnam.parsenamefile`.
check : boolean
Check package data for common errors. (default True)
Returns
-------
bas : ModflowBas object
ModflowBas object (of type :class:`flopy.modflow.ModflowBas`)
Examples
--------
>>> import flopy
>>> m = flopy.modflow.Modflow()
>>> bas = flopy.modflow.ModflowBas.load('test.bas', m, nlay=1, nrow=10,
        ...                                    ncol=10)
"""
if model.verbose:
sys.stdout.write('loading bas6 package file...\n')
if not hasattr(f, 'read'):
filename = f
f = open(filename, 'r')
#dataset 0 -- header
while True:
line = f.readline()
if line[0] != '#':
break
#dataset 1 -- options
line = line.upper()
opts = line.strip().split()
ixsec = False
ichflg = False
ifrefm = False
iprinttime = False
ishowp = False
istoperror = False
stoper = None
if 'XSECTION' in opts:
ixsec = True
if 'CHTOCH' in opts:
ichflg = True
if 'FREE' in opts:
ifrefm = True
if 'PRINTTIME' in opts:
iprinttime = True
if 'SHOWPROGRESS' in opts:
ishowp = True
if 'STOPERROR' in opts:
istoperror = True
i = opts.index('STOPERROR')
stoper = np.float32(opts[i+1])
#get nlay,nrow,ncol if not passed
if nlay is None and nrow is None and ncol is None:
nrow, ncol, nlay, nper = model.get_nrow_ncol_nlay_nper()
#dataset 2 -- ibound
ibound = Util3d.load(f, model, (nlay, nrow, ncol), np.int, 'ibound',
ext_unit_dict)
#print ibound.array
#dataset 3 -- hnoflo
line = f.readline()
hnoflo = np.float32(line.strip().split()[0])
#dataset 4 -- strt
strt = Util3d.load(f, model, (nlay, nrow, ncol), np.float32, 'strt',
ext_unit_dict)
f.close()
# set package unit number
unitnumber = None
filenames = [None]
if ext_unit_dict is not None:
unitnumber, filenames[0] = \
model.get_ext_dict_attr(ext_unit_dict,
filetype=ModflowBas.ftype())
#create bas object and return
bas = ModflowBas(model, ibound=ibound, strt=strt,
ixsec=ixsec, ifrefm=ifrefm, ichflg=ichflg,
stoper=stoper, hnoflo=hnoflo,
unitnumber=unitnumber, filenames=filenames)
if check:
bas.check(f='{}.chk'.format(bas.name[0]), verbose=bas.parent.verbose, level=0)
return bas
|
The Buick Enclave and Encore make up a formidable one-two punch for the brand in the currently booming crossover market. Therefore, it simply makes sense that the brand would look to create new and improved versions of those models to attract even more customers to the segments. That’s exactly what they’ve done with the new Buick Enclave Tuscan Edition, which brings the Tuscan design aesthetic “home” to your vehicle.
The Tuscan Edition, which will be available on the Leather and Premium trims of the 2016 Buick Enclave, will feature a special bronze-colored grille as well as bronze-tinted 20-inch wheels. You can get the Tuscan Edition in white, brown, or black exterior paint colors, all of which highlight the bronze tinting extremely well.
The Tuscan Edition recently debuted at the New York Auto Show, where it got to show off all that special bronze flair. On the Leather and Premium editions, you’ll also get a 3.6-liter V6 with 288 horsepower, a power moonroof, a heated steering wheel, Wi-Fi capability, and even a powered liftgate. Check out the Enclave on display in New York, then come see us at Paul Sur Buick GMC to learn how to take Tuscany with you wherever you go in the new Enclave.
|
from shapely.geometry import shape
from shapely.prepared import prep
from .geography import Geography
class Geometry(object):
def __init__(self, obj, **kwargs):
# stored internally as shapely
if isinstance(obj, dict):
self._shapely_data = obj # keep geojson as is, dont convert to shapely until needed
elif kwargs:
self._shapely_data = kwargs # keep geojson as is, dont convert to shapely until needed
elif "shapely" in type(obj):
self._shapely_data = obj
elif isinstance(obj, Geometry):
self._shapely_data = obj._shapely_data
else:
            raise Exception("cannot create Geometry from %r" % (obj,))
self._prepped_data = None
@property
def _shapely(self):
'shapely object is needed, converted from geojson if needed'
if isinstance(self._shapely_data, dict):
self._shapely_data = shape(self._shapely_data)
return self._shapely_data
@property
def _prepped(self):
'prepared geometry for faster ops, created if needed'
if not self._prepped_data:
self._prepped_data = prep(self._shapely)
return self._prepped_data
@property
def __geo_interface__(self):
if isinstance(self._shapely_data, dict):
# if shapely not created yet, return directly from geojson
return self._shapely_data
else:
return self._shapely_data.__geo_interface__
@property
def type(self):
return self.__geo_interface__["type"]
@property
def coordinates(self):
return self.__geo_interface__["coordinates"]
@property
def geoms(self):
for geoj in self.__geo_interface__["geometries"]:
yield Geometry(geoj)
@property
def is_empty(self):
return True if not self._shapely_data else self._shapely.is_empty
# calculations
def area(self, geodetic=False):
if geodetic:
geog = Geography(self.__geo_interface__)
return geog.area
else:
return self._shapely.area
def length(self, geodetic=False):
if geodetic:
geog = Geography(self.__geo_interface__)
return geog.length
else:
return self._shapely.length
def distance(self, other, geodetic=False):
if geodetic:
geog = Geography(self.__geo_interface__)
other = Geography(other.__geo_interface__)
return geog.distance(other)
else:
other = Geometry(other)
            return self._shapely.distance(other._shapely)
# tests
# TODO: Maybe implement batch ops via prepped, or should that be handled higher up...?
def intersects(self, other):
return self._shapely.intersects(other._shapely)
def disjoint(self, other):
return self._shapely.disjoint(other._shapely)
def touches(self, other):
return self._shapely.touches(other._shapely)
# modify
def walk(self):
pass
def line_to(self):
pass
def buffer(self, distance, resolution=100, geodetic=False):
if geodetic:
geog = Geography(self.__geo_interface__)
buff = geog.buffer(distance, resolution)
return Geometry(buff.__geo_interface__)
else:
return self._shapely.buffer(distance, resolution)
def intersection(self, other):
return self._shapely.intersection(other._shapely)
def union(self, other):
return self._shapely.union(other._shapely)
def difference(self, other):
return self._shapely.difference(other._shapely)
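The lazy-conversion pattern used by `_shapely` and `_prepped` above — keep the cheap raw GeoJSON and build the expensive representation only on first access — can be sketched without shapely. The `LazyShape` class below is purely illustrative (a hypothetical stand-in, not part of this module):

```python
class LazyShape:
    """Toy illustration of lazy conversion with caching:
    store the raw dict and build the costly object on first use only."""

    def __init__(self, geojson):
        self._raw = geojson    # kept as a plain dict until needed
        self._built = None     # expensive representation, created lazily
        self.build_count = 0   # how many times the conversion actually ran

    @property
    def built(self):
        if self._built is None:
            self.build_count += 1  # simulate the costly conversion step
            self._built = tuple(self._raw["coordinates"])
        return self._built
```

Repeated accesses hit the cached value, so the conversion cost is paid at most once per object — the same reason `_shapely` and `_prepped` check before converting.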
|
At the end of a long winter, there are myriad things businesses should do to maintain the exteriors of their buildings. They may want to bring in a professional landscaping company to remove dead plants and trees and replace them with new ones. They should also walk around their buildings and see if any cracks have formed in their foundations or their siding. However, one of the most important steps they can take is having their roofs inspected by a professional roofing company.
During the winter, there are all kinds of problems that can pop up and affect the integrity of a roof. Windy conditions can blow shingles off it. Snow and ice can cause roof lifting and leaking. Water drains can become clogged with leaves that weren’t removed in the fall, and branches can leave damage behind when they fall from trees. At first glance, the roof on a business might look nice, but there could be all kinds of underlying issues that could lead to further problems down the line.
Once spring has officially sprung, you should call a roofing company if you are a business owner and ask to schedule a roof inspection. The company should be able to identify any problem areas on your roof and recommend steps you can take to ensure your roof is safe. By taking preventative measures, you could save yourself a lot of time and money in the long run, and you can prevent the winter weather from wreaking havoc on your business even as you prepare to head into the summer.
Would you like to schedule a roof inspection? Ray Roofing Supply would be happy to come out to your business and give your roof a look so that you don’t have to worry about it between now and next winter. You should get into the habit of inspecting your roof at least once every year, if not more often. Call us at 330-452-8109 today to take advantage of our roofing services.
|
import numpy
import random
from OpenGL.GL import *
import glew_wrap as glew
from Canvas import moltextureCanvas, haloCanvas
from OctaMap import octamap
from trackball import glTrackball
from quaternion import quaternion
from CgUtil import cgSettings
import hardSettings
import ShadowMap
from ShadowMap import AOgpu2
import struct
from MDAnalysis import *
import molGL
TOO_BIG = 0
TOO_SMALL = 1
SIZE_OK = 2
def getAtomRadius(atom, coarse_grain = False):
E2R = {"F": 1.47, "CL": 1.89, "H": 1.10, "C":1.548, "N": 1.4, "O":1.348, "P":1.88, "S":1.808, "CA":1.948, "FE":1.948, "ZN": 1.148, "I": 1.748}
rad = E2R.get(atom[:1], 0)
if rad == 0: rad = E2R.get(atom[:2], 0)
    if rad == 0: rad = 1.5
if coarse_grain: rad = 2.35
return rad
def getAtomColor(atom):
E2C = {"H": 0xFFFFFF,
"HE": 0xFFC0CB,
"LI": 0xB22222,
"BE": 0xFF1493,
"B": 0x00FF00,
"C": 0x808080,
"N": 0x8F8FFF,
"O": 0xF00000,
"F": 0xDAA520,
"NE": 0xFF1493,
"NA": 0x0000FF,
"MG": 0x228B22,
"AL": 0x808090,
"SI": 0xDAA520,
"P": 0xFFA500,
"S": 0xFFC832,
"CL": 0x00FF00,
"AR": 0xFF1493,
"K": 0xFF1493,
"CA": 0x808090,
"SC": 0xFF1493,
"TI": 0x808090,
"V": 0xFF1493,
"CR": 0x808090,
"MN": 0x808090,
"FE": 0xFFA500,
"CO": 0xFF1493,
"NI": 0xA52A2A,
"CU": 0xA52A2A,
"ZN": 0xA52A2A}
E2C_coarse = {"NC3": 0x00CC00 ,"PO4": 0x6600CC, "GL": 0xFFFF33, "W": 0x0000CC}
E2C.update(E2C_coarse)
color = E2C.get(atom, 0)
if color == 0: color = E2C.get(atom[:2], 0)
if color == 0: color = E2C.get(atom[:1], 0)
color_int = [ord(val) for val in struct.unpack("cccc", struct.pack("i", color))]
return numpy.array(color_int[1:])/255.
def convert_color(color):
color_int = [ord(val) for val in struct.unpack("cccc", struct.pack("i", color))]
return numpy.array(color_int[1:])/255.
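The `struct`-based decoding above depends on native byte order. A byte-order-independent sketch using bit shifts (a hypothetical helper, assuming the same `0xRRGGBB` convention as the `E2C` tables above) would be:

```python
import numpy

def convert_color_shift(color):
    # Decode a 0xRRGGBB packed integer into a normalized [R, G, B] array.
    # Hypothetical alternative to the struct-based convert_color() above;
    # bit shifts make the channel order explicit and endianness-independent.
    r = (color >> 16) & 0xFF
    g = (color >> 8) & 0xFF
    b = color & 0xFF
    return numpy.array([r, g, b]) / 255.0
```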
# XXX This class isn't actually used, since everything is in numpy arrays and the drawing is done in C code
class Atom:
def __init__(self, atomid, name):
self.id = atomid
self.r = getAtomRadius(name)
self.col = numpy.array(getAtomColor(name))/255.
def Draw(self):
r = self.r
p = self.pos[self.id]
col = self.col
glColor3f(col[0],col[1],col[2])
glTexCoord2f(self.tx/moltextureCanvas.GetHardRes(),self.ty/moltextureCanvas.GetHardRes())
glNormal3f(1,1,r)
glVertex3f(p[0],p[1],p[2])
glNormal3f(-1,+1, r)
glVertex3f(p[0],p[1],p[2])
glNormal3f(-1,-1, r)
glVertex3f(p[0],p[1],p[2])
glNormal3f(+1,-1, r)
glVertex3f(p[0],p[1],p[2])
def FillTexture(self,texture, texsize):
octamap.FillTexture(texture, texsize, self.tx, self.ty, self.col[0], self.col[1], self.col[2])
def AssignNextTextPos(self, texsize):
self.tx = lx
self.ty = ly
if (lx+octamap.TotTexSizeX()>texsize) or (ly+octamap.TotTexSizeY()>texsize): return False
lx += octamap.TotTexSizeX()
if (lx+octamap.TotTexSizeX()>texsize):
ly+=octamap.TotTexSizeY()
lx=0
return True
def DrawOnTexture(self, CSIZE, px, py, pz, r):
glColor3f(ShadowMap.myrand(), ShadowMap.myrand(), ShadowMap.myrand())
h = 0.0
Xm = -1.0-1.0/CSIZE
Xp = 1.0+1.0/CSIZE
Ym=Xm
Yp=Xp
glew.glMultiTexCoord4fARB(glew.GL_TEXTURE1_ARB, px,py,pz,r)
glTexCoord2f(Xm,Ym); glVertex2f(-h+self.tx, -h+self.ty)
glTexCoord2f(Xp,Ym); glVertex2f(-h+self.tx+CSIZE,-h+self.ty)
glTexCoord2f(Xp,Yp); glVertex2f(-h+self.tx+CSIZE,-h+self.ty+CSIZE)
glTexCoord2f(Xm,Yp); glVertex2f(-h+self.tx, -h+self.ty+CSIZE)
def DrawShadowmap(self):
r = self.r
px, py, pz = self.pos[self.id]
#if ((!geoSettings.showHetatm)&&(hetatomFlag)): return
glNormal3f(+1,+1, r)
glVertex3f(px,py,pz)
glNormal3f(-1,+1, r)
glVertex3f(px,py,pz)
glNormal3f(-1,-1, r)
glVertex3f(px,py,pz)
glNormal3f(+1,-1, r)
glVertex3f(px,py,pz)
def DrawHalo(self, r, px, py, pz):
#r = self.r
#px, py, pz = self.pos[self.id]
#if ((!geoSettings.showHetatm)&&(hetatomFlag)) return
s=cgSettings.P_halo_size * 2.5
glew.glMultiTexCoord2fARB(glew.GL_TEXTURE1_ARB, r+s, (r+s)*(r+s) / (s*s+2*r*s))
glTexCoord2f(+1,+1)
glVertex3f(px,py,pz)
glTexCoord2f(-1,+1)
glVertex3f(px,py,pz)
glTexCoord2f(-1,-1)
glVertex3f(px,py,pz)
glTexCoord2f(+1,-1)
glVertex3f(px,py,pz)
class Molecule:
def __init__(self,filename,istrj = True,coarse_grain=False):
self.r = 0 # default scaling factor for system
self.pos = numpy.zeros(3) # center of bounding box
self.orien = quaternion([0,0,-1,0]) # orientation in space
self.scaleFactor = 1
self.idx = None
self.DirV = []
self.istrj = istrj
self.coarse_grain = coarse_grain
self.clipplane = numpy.array([0.,0.,0.,0,], numpy.float32)
self.excl = numpy.array([], numpy.int32)
if not istrj: self.load_pdb(filename)
else: self.load_trj(filename)
def load_pdb(self,filename):
infile = file(filename)
coords = []
        radii = []
        colors = []
for i,line in enumerate(infile):
if not (line[:4] == "ATOM" or line[:6] == "HETATM"): continue
name = line[13:16]
x, y, z = float(line[30:38]),float(line[38:46]),float(line[46:54])
coords.append((x,y,z))
radii.append(getAtomRadius(name, self.coarse_grain))
colors.append(getAtomColor(name))
self.numatoms = len(coords)
self.atompos = numpy.array(coords, numpy.float32)
self.colors = numpy.array(colors, numpy.float32)
self.radii = numpy.array(radii, numpy.float32)
# Calculate bounding box
min = numpy.minimum.reduce(self.atompos)
max = numpy.maximum.reduce(self.atompos)
pos = (min+max)/2
self.r = 0.5*numpy.sqrt(numpy.sum(numpy.power(max-min-4,2)))
self.pos = pos
self.min, self.max = min-pos, max-pos
self.textureAssigned = False
self.textures = numpy.ones((self.numatoms, 2), numpy.float32)
self.ReassignTextureAutosize()
self.ResetAO()
def load_trj(self,prefix):
universe = AtomGroup.Universe(prefix+".psf", prefix+".dcd")
print "Finished loading psf"
self.universe = universe
#self.atompos = numpy.asarray(universe.dcd.ts._pos).T
self.atompos = universe.dcd.ts._pos
self.sel = universe
self.idx = self.sel.atoms.indices()
self.numatoms = universe.atoms.numberOfAtoms()
print "Finished selection"
radii = [getAtomRadius(a.name, self.coarse_grain) for a in universe.atoms]
colors = [getAtomColor(a.name) for a in universe.atoms]
self.colors = numpy.array(colors, numpy.float32)
self.radii = numpy.array(radii, numpy.float32)
# This is the old way for using Vertex arrays - it might still be faster if I can use indexes arrays
# or vertex buffer objects
# see glDrawElements so I don't have to duplicate everything by 4
#verts = numpy.transpose(universe.dcd.ts._pos)
#self.atompos = numpy.repeat(verts, 4, axis=0)
# Set up vertex arrays
#glVertexPointer(3, GL_FLOAT, 0, self.atompos)
#glEnableClientState(GL_VERTEX_ARRAY)
#glNormalPointer(GL_FLOAT, 0, self.normals)
#glEnableClientState(GL_NORMAL_ARRAY)
#glColorPointer(3,GL_FLOAT, 0, self.colors)
#glEnableClientState(GL_COLOR_ARRAY)
# Calculate bounding box
min = numpy.minimum.reduce(self.atompos)
max = numpy.maximum.reduce(self.atompos)
pos = (min+max)/2
self.r = 0.5*numpy.sqrt(numpy.sum(numpy.power(max-min-4,2)))
self.pos = pos
self.min, self.max = min-pos, max-pos
# for drawing lines
if hasattr(self.universe, "_bonds"):
self.bonds = numpy.array(self.universe._bonds)
self.textureAssigned = False
self.textures = numpy.ones((self.numatoms, 2), numpy.float32)
self.ReassignTextureAutosize()
self.ResetAO()
# this is for trajectory averaging
self.new_ts = self.universe.dcd.ts._pos
self.averaging = 1
def read_next_frame(self):
if self.istrj:
currframe = self.universe.dcd.ts.frame
if currframe == len(self.universe.dcd): currframe = 0
ts = self.universe.dcd[currframe] # this looks weird, but currframe is 1-indexed
if self.averaging > 1 and not ts.frame > len(self.universe.dcd)-self.averaging:
self.new_ts *= 0
self.new_ts += self.atompos
for ts in self.universe.dcd[currframe+1:currframe+self.averaging]:
self.new_ts += self.atompos
ts.frame = currframe+1
self.atompos[:] = self.new_ts/self.averaging
def read_previous_frame(self):
if self.istrj:
currframe = self.universe.dcd.ts.frame-1
self.universe.dcd[currframe-1]
def ReassignTextureAutosize(self):
if (self.textureAssigned): return
guess = hardSettings.TSIZE
lastThatWorked = guess
enlarge = False; shrink = False; forced = False
while True:
if (enlarge and shrink): forced = True
moltextureCanvas.SetRes(guess)
lastThatWorked = guess
res = SetCsize(guess, self.numatoms)
if not forced:
if ((res==TOO_BIG) and (guess/2 >= 16)):
shrink = True
guess /= 2
continue
if ((res == TOO_SMALL) and (guess*2 <= hardSettings.MAX_TSIZE)):
enlarge = True
guess *= 2
continue
octamap.SetSize(hardSettings.CSIZE)
self.ReassignTexture(guess)
break
# Rebuild texture arrays
#glTexCoordPointer(2, GL_FLOAT, 0, self.textures)
#glEnableClientState(GL_TEXTURE_COORD_ARRAY)
def ReassignTexture(self, texsize):
lx = ly = 0
# assign texture positions
textures = []
for i in range(self.numatoms):
textures.append((lx, ly))
if (lx+octamap.TotTexSizeX()>texsize) or (ly+octamap.TotTexSizeY()>texsize): raise Exception
lx += octamap.TotTexSizeX()
if (lx+octamap.TotTexSizeX()>texsize):
ly+=octamap.TotTexSizeY()
lx=0
self.textures = numpy.array(textures, numpy.float32)
def DrawLines(self):
r = self.r * self.scaleFactor
px, py, pz = self.pos
glPushMatrix()
glScalef(1./r,1./r,1./r)
glMultMatrixd((glTrackball.quat * self.orien).asRotation())
glTranslatef(-px, -py, -pz)
glDisable(glew.GL_VERTEX_PROGRAM_ARB)
glDisable(glew.GL_FRAGMENT_PROGRAM_ARB)
glBegin(GL_LINES)
molGL.molDrawSticks(self.atompos, self.bonds, self.colors, self.clipplane)
glEnd()
glPopMatrix()
def Draw(self):
r = self.r * self.scaleFactor
px, py, pz = self.pos
glPushMatrix()
glScalef(1./r,1./r,1./r)
glMultMatrixd((glTrackball.quat * self.orien).asRotation())
glTranslatef(-px, -py, -pz)
#glClipPlane(GL_CLIP_PLANE0, self.clipplane)
x = glGetFloatv(GL_MODELVIEW_MATRIX)
scalef = extractCurrentScaleFactor_x(x)
glew.glProgramEnvParameter4fARB(glew.GL_VERTEX_PROGRAM_ARB,0,scalef,0,0,0)
glEnable(glew.GL_VERTEX_PROGRAM_ARB)
glEnable(glew.GL_TEXTURE_2D)
glew.glActiveTextureARB(glew.GL_TEXTURE0_ARB)
moltextureCanvas.SetAsTexture()
if cgSettings.P_shadowstrenght>0:
ShadowMap.GetCurrentPVMatrix()
ShadowMap.FeedParameters()
for i in range(3):
glew.glProgramEnvParameter4fARB(glew.GL_FRAGMENT_PROGRAM_ARB, i,
x[i][0],x[i][1],x[i][2],0)
glew.glProgramEnvParameter4fARB(glew.GL_FRAGMENT_PROGRAM_ARB, 6,
self.PredictAO(),0,0,0)
glEnable(glew.GL_VERTEX_PROGRAM_ARB)
glEnable(glew.GL_FRAGMENT_PROGRAM_ARB)
glBegin(GL_QUADS)
molGL.MolDraw(self.atompos, self.radii, self.textures/moltextureCanvas.GetHardRes(), self.colors, self.clipplane, self.excl, self.idx)
glEnd()
#glDrawArrays(GL_QUADS, 0, self.numatoms)
glDisable(glew.GL_VERTEX_PROGRAM_ARB)
glDisable(glew.GL_FRAGMENT_PROGRAM_ARB)
# Draw wireframe for clipplane
if not numpy.allclose(self.clipplane, 0):
clipplane = self.clipplane
glColor(0.5, 0.5, 0.5)
glBegin(GL_LINE_STRIP)
glVertex3f(px-r, clipplane[3], pz-r)
glVertex3f(px-r, clipplane[3], pz+r)
glVertex3f(px+r, clipplane[3], pz+r)
glVertex3f(px+r, clipplane[3], pz-r)
glVertex3f(px-r, clipplane[3], pz-r)
glEnd()
glPopMatrix()
def DrawShadowmap(self,invert,shadowSettings):
r = self.r * self.scaleFactor
px, py, pz = self.pos
glPushMatrix()
glScalef(1./r,1./r, 1./r)
glMultMatrixd((glTrackball.quat * self.orien).asRotation())
glTranslate(-px, -py, -pz)
#glClipPlane(GL_CLIP_PLANE0, self.clipplane)
scalef=extractCurrentScaleFactor()
glew.glProgramEnvParameter4fARB(glew.GL_VERTEX_PROGRAM_ARB, 0, scalef,0,0,0)
glEnable(glew.GL_VERTEX_PROGRAM_ARB)
glEnable(glew.GL_FRAGMENT_PROGRAM_ARB)
glew.glActiveTextureARB(glew.GL_TEXTURE0_ARB)
glDisable(GL_TEXTURE_2D)
glew.glActiveTextureARB(glew.GL_TEXTURE1_ARB)
glDisable(GL_TEXTURE_2D)
shadowSettings.BindShaders()
glBegin(GL_QUADS)
molGL.MolDrawShadow(self.atompos, self.radii, self.clipplane, self.excl, self.idx)
glEnd()
#glDisableClientState(GL_COLOR_ARRAY)
#glDisableClientState(GL_TEXTURE_COORD_ARRAY)
#glDrawArrays(GL_QUADS, 0, self.numatoms)
#glEnableClientState(GL_COLOR_ARRAY)
#glEnableClientState(GL_TEXTURE_COORD_ARRAY)
#if (sticks):
# pass
glPopMatrix()
def DrawHalos(self):
        # let's try to avoid THIS!
# Moved to drawFrame()
#shadowmap.prepareDepthTextureForCurrentViewpoint() # hum, unavoidable.
r = self.r * self.scaleFactor
px, py, pz = self.pos
glPushMatrix()
glScalef(1/r,1/r,1/r)
glMultMatrixd((glTrackball.quat * self.orien).asRotation())
glTranslatef(-px,-py,-pz)
#glClipPlane(GL_CLIP_PLANE0, self.clipplane)
x = glGetFloatv(GL_MODELVIEW_MATRIX)
scalef = extractCurrentScaleFactor_x(x)
glew.glProgramEnvParameter4fARB(glew.GL_VERTEX_PROGRAM_ARB, 0,scalef, 0,0,0)
glEnable(glew.GL_VERTEX_PROGRAM_ARB)
glEnable(glew.GL_FRAGMENT_PROGRAM_ARB)
glDepthMask(False)
glEnable(GL_BLEND)
if (cgSettings.doingAlphaSnapshot): glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)
else: glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)
cgSettings.BindHaloShader( haloCanvas.getResPow2() )
glew.glProgramEnvParameter4fARB(glew.GL_FRAGMENT_PROGRAM_ARB, 0,
(100.0+cgSettings.P_halo_aware*1300.0)/scalef/r, 0,0,0)
glBegin(GL_QUADS)
molGL.MolDrawHalo(self.atompos, self.radii, cgSettings.P_halo_size, self.clipplane, self.excl, self.idx)
glEnd()
glDisable(GL_BLEND)
cgSettings.BindShaders()
glDepthMask(True)
glPopMatrix()
glDisable(glew.GL_VERTEX_PROGRAM_ARB)
glDisable(glew.GL_FRAGMENT_PROGRAM_ARB)
def DrawOnTexture(self):
glEnable(GL_BLEND)
glBlendFunc(GL_ONE,GL_ONE)
glMatrixMode(GL_PROJECTION)
glPushMatrix()
glLoadIdentity()
glOrtho(0,moltextureCanvas.GetSoftRes(),0,moltextureCanvas.GetSoftRes(), 0,1)
glMatrixMode(GL_MODELVIEW)
glPushMatrix()
glLoadIdentity()
lastviewport = glGetIntegerv(GL_VIEWPORT)
glViewport(0,0,moltextureCanvas.GetSoftRes(),moltextureCanvas.GetSoftRes())
glew.glActiveTextureARB(glew.GL_TEXTURE1_ARB)
glDisable(GL_TEXTURE_2D)
glew.glActiveTextureARB(glew.GL_TEXTURE0_ARB)
glDisable(GL_TEXTURE_2D)
glBegin(GL_QUADS)
molGL.MolDrawOnTexture(self.atompos, self.radii, self.textures, hardSettings.CSIZE, self.idx)
glEnd()
#if (self.sticks):
# pass
glMatrixMode(GL_PROJECTION)
glPopMatrix()
glMatrixMode(GL_MODELVIEW)
glPopMatrix()
glViewport(lastviewport[0],lastviewport[1],lastviewport[2],lastviewport[3])
return lastviewport
def PrepareAOstep(self, nsteps, shadowmap):
if not self.DoingAO(): return True
if not self.AOstarted: self.PrepareAOstart()
AOgpu2.Bind()
if ShadowMap.validView(self.DirV[self.AOdoneLvl]): ao = AOgpu2(self.DirV[self.AOdoneLvl], self, len(self.DirV), shadowmap)
AOgpu2.UnBind()
self.AOdoneLvl += 1
return (self.AOdoneLvl >= len(self.DirV))
# for testing
def PrepareAOSingleView(self, shadowmap, static_i=[0]):
self.PrepareAOstart()
AOgpu2.Bind()
ao = AOgpu2(self.DirV[static_i[0]], self, 4, shadowmap)
static_i[0] += 1
        if (static_i[0] >= len(self.DirV)): static_i[0] = 0
AOgpu2.UnBind()
self.AOdoneLvl = len(self.DirV)
def PrepareAOstart(self):
self.AOdoneLvl = 0
AOgpu2.Reset(self)
self.AOstarted = True
if (len(self.DirV) == 0):
# generate probe views
self.DirV = ShadowMap.GenUniform(hardSettings.N_VIEW_DIR)
# mix them up
numpy.random.shuffle(self.DirV)
def ResetAO(self):
self.AOready = False
self.AOstarted = False
self.AOdoneLvl = 0
#self.DirV = []
def DoingAO(self):
if (cgSettings.P_texture == 0): return False
if (len(self.DirV) == 0): return True
return self.AOdoneLvl < len(self.DirV)
def DecentAO(self):
k = 1.
if (self.AOdoneLvl>=len(self.DirV)): return True
else: return False # XXX
if (self.numatoms<10): return (self.AOdoneLvl>6*k)
if (self.numatoms<100): return (self.AOdoneLvl>4*k)
if (self.numatoms<1000): return (self.AOdoneLvl>2*k)
if (self.numatoms<10000): return (self.AOdoneLvl>1*k)
return True
def PredictAO(self):
# multiplicative prediction
if self.AOstarted == False: return 1.0
else:
coeff = 0.25+(self.AOdoneLvl-1)/20.
if (coeff > 1.0): coeff = 1.0
return coeff*len(self.DirV)*1.0/self.AOdoneLvl
def extractCurrentScaleFactor():
x = glGetFloatv(GL_MODELVIEW_MATRIX)
scalef=numpy.power(numpy.abs(numpy.linalg.det(x)),1./3.)
return scalef
def extractCurrentScaleFactor_x(x):
return numpy.power(numpy.abs(numpy.linalg.det(x)),1./3.)
def SetCsize(textsize, natoms):
# initial guess
i = numpy.ceil(numpy.sqrt(natoms))
hardSettings.CSIZE = textsize / int(i)
if (hardSettings.CSIZE > 250):
hardSettings.CSIZE = 250
return TOO_BIG
if (hardSettings.CSIZE < 6):
hardSettings.CSIZE = 6
return TOO_SMALL
return SIZE_OK
|
It's hard to beat Park Slope for a day of family fun. With its tree- and brownstone-lined streets, high-performing public schools, destination playgrounds, top restaurants, and a host of cultural and kids' activities—not to mention all the natural beauty and kid-friendly things to do in Prospect Park—Park Slope is one of the best family neighborhoods in NYC.
The Slope is a true community: Local shopkeepers know their regulars by name, neighbors sit and chat on their front stoops, and parents share info and swap baby items on the massive online board Park Slope Parents. For our purposes, we're defining Park Slope as stretching from Fourth Avenue to the eastern side of Prospect Park, and from Flatbush Avenue south to Prospect Avenue. Here are our top 50 things to do with kids in Park Slope, from culture to eating, shopping, and playing.
Giocare Play Spot in South Slope hosts drop-in play and various classes. Photo by Diana Kim.
1. Drop in on a rainy day for play and art-making at one of Park Slope's many baby- and toddler-friendly playspaces, including Giocare Play Spot, Good Day Play Cafe, and the drop-in playspace at Congregation Beth Elohim.
2. Hit one of the three playgrounds along Prospect Park West in Prospect Park: Run through the harp-shaped sprinkler in Harmony Playground, bring your toddler to slide and climb at the Garfield Tot Lot, or dig in the sandbox at the Third Street Playground.
3. Venture a little further into Prospect Park and set kids free to roam and play at the Zucker Natural Exploration Area in Prospect Park, a unique playground made from reclaimed natural materials.
4. Visit the wonderful (and compact) Prospect Park Zoo.
5. Ride the park's vintage carousel, and explore the Lefferts Historic House, with its calendar full of kid-friendly programs.
6. Skate, boat, and splash at the LeFrak Center in Prospect Park.
7. Swim at the Prospect Park YMCA or take a class and run the indoor track at the Park Slope Armory Y.
8. Run around in the sprinklers at JJ Byrne Playground, hit the swings, or kick a ball around in the generously-sized artificial turf areas.
9. Take music and art classes and enjoy kids' concerts at the Hootenanny Art House.
10. Head to Brooklyn Robot Foundry for hands-on, STEM-fueled projects and activities, including family robot-building sessions on weekends.
11. Hone your writing and crime-fighting abilities at Brooklyn Superhero Supply Co., where you'll find the writing program 826NYC behind a secret door.
12. Try the experiments and projects offered at The Tiny Scientist, which hosts workshops, classes, and after-school programs designed to foster a love of science.
13. Dance and act at Brooklyn Arts Exchange, which has classes for every age level.
14. Hone your critical and reasoning skills at the Brooklyn Game Lab, where kids (and adults) play creative board games.
15. Take an art, music, dance, or theater class or just enjoy the play space at Kidville.
16. Spark your creativity with pottery making and painting at The Painted Pot.
17. Try your hand at the ancient art of origami at Taro's Origami Studio.
18. Sing, dance, and make believe at Spoke the Hub.
19. Get the wiggles out with your toddler at a drop-in music class or story time at Ume Ume.
Alice and the white rabbit look for the stolen tarts at Puppetworks. Photo by TA Smith/courtesy the theater.
20. See puppet shows and hang with the custom-made marionettes at local kid-theater institution Puppetworks.
21. All summer long, catch free outdoor performances at the Prospect Park bandshell courtesy of Celebrate Brooklyn!
From the playground to outdoor festivals to theater, the Old Stone House is a community gathering spot. Photo by Bob Levine/courtesy of OSH.
22. Attend a family concert, explore the garden, or check out community art exhibits at the historic Old Stone House.
23. Catch classical and kids' music concerts at the Brooklyn Conservatory of Music, or sign up for music lessons.
24. Go for story hour in the garden or curl up with a book in the lovely children's area at the Park Slope Library.
25. Check out the giant, student-made insect sculptures clinging to the courtyard wall of P.S. 107.
26. Catch the family improv show TheatreSports! and check out the musicals and performances at The Gallery Players theater.
28. Hunt for animal statues, such as bronze panthers, or historical monuments like the statue of Marquis de Lafayette in Prospect Park.
29. Attend a family concert by local mom and celebrity musician Suzi Shelton at various venues in the neighborhood.
30. Catch a weekend afternoon music performance at the one-of-a-kind Barbes—from jazz to tango, there's something new every time.
31. See a student production of Brooklyn Acting Lab, which offers theater workshops for kids year-round.
32. Indulge your sweet tooth at The Chocolate Room.
33. Down a slice or two at Pizza Plus, which also has vegan and gluten-free options.
34. Barbecue with your family at a designated spot in Prospect Park, or pack a basket and head to the tables outside Prospect Park's Picnic House.
Tuck into some roti at Talde's family-friendly brunch. Photo courtesy of the restaurant.
36. Enjoy an Asian-fusion brunch at Talde.
37. Indulge in sweet cakes, cupcakes, macaroons, and cookies at Buttermilk Bake Shop, where you can also take baking classes and throw parties.
38. Cool off with fresh frozen yogurt made on site with a variety of fresh and sweet toppings at Culture, or locally-sourced housemade gelato at the nearby L'Albero dei Gelati.
39. With a location right off Prospect Park in Park Slope, Dizzy's is the quintessential Park Slope diner for a casual meal.
40. Polish off some empanadas, tacos, and arepas at lively, supremely family-friendly Latin American bistro Bogota.
41. Check out the handcrafted toys at Norman and Jules Toy Shop and stop by the meticulously landscaped backyard for free play or a kids' concert.
42. Prowl the bookshelves or catch a reading or storytime at Stories Bookshop and Storytelling Lab, whose calendar features both published authors and young writers.
43. Play with the train set and critter playhouse outside of Little Things Toy Store.
44. Browse the kids' section and enjoy storytime at the Eighth Avenue outpost of Powerhouse Bookstore.
45. Settle into a comfy chair and read a book in the children's section at one of Park Slope's oldest independent bookstores, The Community Bookstore.
46. Get junior's hair cut at LuLu's Cuts and Toys, and leave with a balloon, lollipop, and new 'do.
47. Buy fresh, locally sourced produce at the Park Slope Farmers' Market or the Grand Army Plaza Greenmarket—or, if you're a local, join the Park Slope Food Coop.
48. Find cool crafts, unique toys, and great goodie bag trinkets at Toy Space.
49. Head downstairs to the giant children's section at the neighborhood Barnes & Noble to read and socialize.
50. Take your tween or teen to Beacon's Closet or Life Vintage and Thrift for stylish and trendy thrift shopping.
This article was first published in January 2012 but has since been updated.
|
import numpy as np
class HeadEquation:
def equation(self):
'''Mix-in class that returns matrix rows for head-specified conditions.
(really written as constant potential element)
Works for nunknowns = 1
Returns matrix part nunknowns,neq,npval, complex
Returns rhs part nunknowns,nvbc,npval, complex
Phi_out - c*T*q_s = Phi_in
Well: q_s = Q / (2*pi*r_w*H)
LineSink: q_s = sigma / H = Q / (L*H)
'''
mat = np.empty((self.nunknowns, self.model.neq,
self.model.npval), 'D')
# rhs needs be initialized zero
rhs = np.zeros((self.nunknowns, self.model.ngvbc,
self.model.npval), 'D')
for icp in range(self.ncp):
istart = icp * self.nlayers
ieq = 0
for e in self.model.elementlist:
if e.nunknowns > 0:
mat[istart: istart + self.nlayers,
ieq: ieq + e.nunknowns, :] = e.potinflayers(
self.xc[icp], self.yc[icp], self.layers)
if e == self:
for i in range(self.nlayers):
mat[istart + i, ieq + istart + i, :] -= \
self.resfacp[istart + i] * \
e.dischargeinflayers[istart + i]
ieq += e.nunknowns
for i in range(self.model.ngbc):
rhs[istart: istart + self.nlayers, i, :] -= \
self.model.gbclist[i].unitpotentiallayers(
self.xc[icp], self.yc[icp], self.layers)
if self.type == 'v':
iself = self.model.vbclist.index(self)
for i in range(self.nlayers):
rhs[istart + i, self.model.ngbc + iself, :] = \
self.pc[istart + i] / self.model.p
return mat, rhs
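The head-specified condition in the docstring above (Phi_out - c*T*q_s = Phi_in) reduces, for a single unknown at a single control point, to one linear equation per layer: the matrix coefficient is the potential influence minus the resistance correction. A minimal standalone sketch of that row (my own illustration, not part of this module; the names are hypothetical):

```python
# Sketch of the head-specified row: coefficient = potential influence minus
# resfac times discharge influence; rhs is the specified potential.
def head_coefficient(pot_inf, resfac, q_inf):
    return pot_inf - resfac * q_inf

coef = head_coefficient(2.0, 0.5, 1.0)  # influence 2.0, resistance 0.5, unit q 1.0
x = 3.0 / coef  # parameter value that produces the specified potential 3.0
```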
class WellBoreStorageEquation:
def equation(self):
'''Mix-in class that returns matrix rows for multi-aquifer element with
total given discharge, uniform but unknown head and
InternalStorageEquation
'''
mat = np.zeros((self.nunknowns, self.model.neq,
self.model.npval), 'D')
rhs = np.zeros((self.nunknowns, self.model.ngvbc,
self.model.npval), 'D')
ieq = 0
for e in self.model.elementlist:
if e.nunknowns > 0:
head = e.potinflayers(self.xc[0], self.yc[0], self.layers) / \
self.aq.T[self.layers][:, np.newaxis, np.newaxis]
mat[:-1, ieq: ieq + e.nunknowns, :] = head[:-1, :] - head[1:, :]
mat[-1, ieq: ieq + e.nunknowns, :] -= np.pi * self.rc**2 * \
self.model.p * head[0, :]
if e == self:
disterm = self.dischargeinflayers * self.res / (2 * np.pi *
self.rw * self.aq.Haq[self.layers][:, np.newaxis])
if self.nunknowns > 1: # Multiple layers
for i in range(self.nunknowns - 1):
mat[i, ieq + i, :] -= disterm[i]
mat[i, ieq + i + 1, :] += disterm[i + 1]
mat[-1, ieq: ieq + self.nunknowns, :] += \
self.dischargeinflayers
mat[-1, ieq, :] += \
np.pi * self.rc ** 2 * self.model.p * disterm[0]
ieq += e.nunknowns
for i in range(self.model.ngbc):
head = self.model.gbclist[i].unitpotentiallayers(
self.xc[0], self.yc[0], self.layers) / \
self.aq.T[self.layers][:, np.newaxis]
rhs[:-1, i, :] -= head[:-1, :] - head[1:, :]
rhs[-1, i, :] += np.pi * self.rc ** 2 * self.model.p * head[0, :]
if self.type == 'v':
iself = self.model.vbclist.index(self)
rhs[-1, self.model.ngbc + iself, :] += self.flowcoef
if self.hdiff is not None:
# head[0] - head[1] = hdiff
rhs[:-1, self.model.ngbc + iself, :] += \
self.hdiff[:, np.newaxis] / self.model.p
return mat, rhs
class HeadEquationNores:
def equation(self):
'''Mix-in class that returns matrix rows for head-specified conditions.
(really written as constant potential element)
Returns matrix part nunknowns, neq, npval, complex
Returns rhs part nunknowns, nvbc, npval, complex
'''
mat = np.empty((self.nunknowns, self.model.neq,
self.model.npval), 'D')
rhs = np.zeros((self.nunknowns, self.model.ngvbc,
self.model.npval), 'D')
for icp in range(self.ncp):
istart = icp * self.nlayers
ieq = 0
for e in self.model.elementlist:
if e.nunknowns > 0:
mat[istart: istart + self.nlayers,
ieq: ieq + e.nunknowns, :] = e.potinflayers(
self.xc[icp], self.yc[icp], self.layers)
ieq += e.nunknowns
for i in range(self.model.ngbc):
rhs[istart: istart + self.nlayers, i, :] -= \
self.model.gbclist[i].unitpotentiallayers(
self.xc[icp], self.yc[icp], self.layers)
if self.type == 'v':
iself = self.model.vbclist.index(self)
for i in range(self.nlayers):
rhs[istart + i, self.model.ngbc + iself, :] = \
self.pc[istart + i] / self.model.p
return mat, rhs
class LeakyWallEquation:
def equation(self):
'''Mix-in class that returns matrix rows for leaky-wall condition
Returns matrix part nunknowns,neq,npval, complex
Returns rhs part nunknowns,nvbc,npval, complex
'''
mat = np.empty((self.nunknowns, self.model.neq,
self.model.npval), 'D')
rhs = np.zeros((self.nunknowns, self.model.ngvbc,
self.model.npval), 'D')
for icp in range(self.ncp):
istart = icp * self.nlayers
ieq = 0
for e in self.model.elementlist:
if e.nunknowns > 0:
qx, qy = e.disvecinflayers(self.xc[icp], self.yc[icp],
self.layers)
mat[istart: istart + self.nlayers,
ieq: ieq + e.nunknowns, :] = \
qx * self.cosout[icp] + qy * self.sinout[icp]
if e == self:
hmin = e.potinflayers(
self.xcneg[icp], self.ycneg[icp], self.layers) / \
self.aq.T[self.layers][:, np.newaxis, np.newaxis]
hplus = e.potinflayers(
self.xc[icp], self.yc[icp], self.layers) / \
self.aq.T[self.layers][:, np.newaxis, np.newaxis]
mat[istart:istart + self.nlayers,
ieq: ieq + e.nunknowns, :] -= \
self.resfac[:, np.newaxis, np.newaxis] * \
(hplus - hmin)
ieq += e.nunknowns
for i in range(self.model.ngbc):
qx, qy = self.model.gbclist[i].unitdisveclayers(
self.xc[icp], self.yc[icp], self.layers)
rhs[istart: istart + self.nlayers, i, :] -= \
qx * self.cosout[icp] + qy * self.sinout[icp]
#if self.type == 'v':
# iself = self.model.vbclist.index(self)
# for i in range(self.nlayers):
# rhs[istart+i,self.model.ngbc+iself,:] = \
# self.pc[istart+i] / self.model.p
return mat, rhs
class MscreenEquation:
def equation(self):
'''Mix-in class that returns matrix rows for multi-screen conditions
where total discharge is specified.
Works for nunknowns = 1
Returns matrix part nunknowns, neq, npval, complex
Returns rhs part nunknowns, nvbc, npval, complex
head_out - c * q_s = h_in
Set h_i - h_(i + 1) = 0 and Sum Q_i = Q'''
mat = np.zeros((self.nunknowns, self.model.neq,
self.model.npval), 'D')
rhs = np.zeros((self.nunknowns, self.model.ngvbc,
self.model.npval), 'D')
ieq = 0
for icp in range(self.ncp):
istart = icp * self.nlayers
ieq = 0
for e in self.model.elementlist:
if e.nunknowns > 0:
head = e.potinflayers(
self.xc[icp], self.yc[icp], self.layers) / \
self.aq.T[self.layers][:, np.newaxis, np.newaxis]
mat[istart: istart + self.nlayers - 1,
ieq: ieq + e.nunknowns, :] = \
head[:-1, :] - head[1:, :]
if e == self:
for i in range(self.nlayers-1):
mat[istart + i, ieq + istart + i, :] -= \
self.resfach[istart + i] * \
e.dischargeinflayers[istart + i]
mat[istart + i, ieq + istart + i + 1, :] += \
self.resfach[istart + i + 1] * \
e.dischargeinflayers[istart + i + 1]
mat[istart + i,
ieq + istart: ieq + istart + i + 1, :] -= \
self.vresfac[istart + i] * \
e.dischargeinflayers[istart + i]
mat[istart + self.nlayers - 1,
ieq + istart: ieq + istart + self.nlayers, :] = 1.0
ieq += e.nunknowns
for i in range(self.model.ngbc):
head = self.model.gbclist[i].unitpotentiallayers(
self.xc[icp], self.yc[icp], self.layers) / \
self.aq.T[self.layers][:, np.newaxis]
rhs[istart: istart + self.nlayers - 1, i, :] -= \
head[:-1, :] - head[1:, :]
if self.type == 'v':
iself = self.model.vbclist.index(self)
rhs[istart + self.nlayers - 1, self.model.ngbc + iself, :] = 1.0
# If self.type == 'z', it should sum to zero,
# which is the default value of rhs
return mat, rhs
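The multi-screen conditions in the docstring (h_i - h_(i+1) = 0 for adjacent screens plus Sum Q_i = Q) produce a fixed row pattern, which the loop above builds inside the full system. A standalone sketch of just that pattern, assuming numpy (my own illustration, not code from this module):

```python
import numpy as np

def mscreen_pattern(nlayers):
    """Row pattern for an nlayers-screen: head differences, then discharge sum."""
    m = np.zeros((nlayers, nlayers))
    for i in range(nlayers - 1):
        m[i, i] = 1.0       # +h_i
        m[i, i + 1] = -1.0  # -h_(i+1)
    m[-1, :] = 1.0          # sum of the screen discharges equals Q
    return m
```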
class MscreenDitchEquation:
def equation(self):
'''Mix-in class that returns matrix rows for multi-screen conditions
where total discharge is specified.
Returns matrix part nunknowns,neq,npval, complex
Returns rhs part nunknowns,nvbc,npval, complex
head_out - c*q_s = h_in
Set h_i - h_(i+1) = 0 and Sum Q_i = Q
I would say
headin_i - headin_(i+1) = 0
headout_i - c*qs_i - headout_(i+1) + c*qs_(i+1) = 0
In case of storage:
Sum Q_i - A * p^2 * headin = Q
'''
mat = np.zeros((self.nunknowns, self.model.neq,
self.model.npval), 'D')
rhs = np.zeros((self.nunknowns, self.model.ngvbc,
self.model.npval), 'D')
ieq = 0
for icp in range(self.ncp):
istart = icp * self.nlayers
ieq = 0
for e in self.model.elementlist:
if e.nunknowns > 0:
head = e.potinflayers(
self.xc[icp], self.yc[icp], self.layers) / \
self.aq.T[self.layers][:, np.newaxis, np.newaxis]
if self.nlayers > 1:
mat[istart: istart + self.nlayers - 1,
ieq: ieq + e.nunknowns, :] = \
head[:-1, :] - head[1:, :]
# Store head in top layer in 2nd to last equation
# of this control point
mat[istart + self.nlayers - 1,
ieq: ieq + e.nunknowns, :] = head[0,:]
if e == self:
# Correct head in top layer in second to last equation
# to make it head inside
mat[istart + self.nlayers - 1,
ieq + istart, :] -= self.resfach[istart] * \
e.dischargeinflayers[istart]
if icp == 0:
istartself = ieq # Needed to build last equation
for i in range(self.nlayers-1):
mat[istart + i, ieq + istart + i, :] -= \
self.resfach[istart + i] * \
e.dischargeinflayers[istart + i]
mat[istart + i, ieq + istart + i + 1, :] += \
self.resfach[istart + i + 1] * \
e.dischargeinflayers[istart + i + 1]
#vresfac is not yet used here; it is set to zero as
#I don't quite know what it means yet
#mat[istart + i, ieq + istart:ieq+istart+i+1,:] -= \
# self.vresfac[istart + i] * \
# e.dischargeinflayers[istart + i]
ieq += e.nunknowns
for i in range(self.model.ngbc):
head = self.model.gbclist[i].unitpotentiallayers(
self.xc[icp], self.yc[icp], self.layers) / \
self.aq.T[self.layers][:, np.newaxis]
if self.nlayers > 1:
rhs[istart: istart + self.nlayers - 1, i, :] -= \
head[:-1, :] - head[1:, :]
# Store minus the head in top layer in second to last equation
# for this control point
rhs[istart + self.nlayers - 1, i, :] -= head[0, :]
# Modify last equations
for icp in range(self.ncp - 1):
ieq = (icp + 1) * self.nlayers - 1
# Head first layer control point icp - Head first layer control
# point icp + 1
mat[ieq, :, :] -= mat[ieq + self.nlayers, :, :]
rhs[ieq, :, :] -= rhs[ieq + self.nlayers, :, :]
# Last equation setting the total discharge of the ditch
mat[-1, :, :] = 0.0
mat[-1, istartself: istartself + self.nparam, :] = 1.0
if self.Astorage is not None:
# Used to store last equation in case of ditch storage
matlast = np.zeros((self.model.neq, self.model.npval), 'D')
rhslast = np.zeros((self.model.npval), 'D')
ieq = 0
for e in self.model.elementlist:
head = e.potinflayers(self.xc[0], self.yc[0], self.layers) / \
self.aq.T[self.layers][:, np.newaxis, np.newaxis]
matlast[ieq: ieq + e.nunknowns] -= \
self.Astorage * self.model.p ** 2 * head[0, :]
if e == self:
# only need to correct first unknown
matlast[ieq] += self.Astorage * self.model.p ** 2 * \
self.resfach[0] * e.dischargeinflayers[0]
ieq += e.nunknowns
for i in range(self.model.ngbc):
head = self.model.gbclist[i].unitpotentiallayers(
self.xc[0], self.yc[0], self.layers) / \
self.aq.T[self.layers][:, np.newaxis]
rhslast += self.Astorage * self.model.p ** 2 * head[0]
mat[-1] += matlast
rhs[-1, :, :] = 0.0
if self.type == 'v':
iself = self.model.vbclist.index(self)
rhs[-1, self.model.ngbc + iself, :] = 1.0
# If self.type == 'z', it should sum to zero, which is the default
# value of rhs
if self.Astorage is not None:
rhs[-1, self.model.ngbc + iself, :] += rhslast
return mat, rhs
class InhomEquation:
def equation(self):
'''Mix-in class that returns matrix rows for inhomogeneity conditions'''
mat = np.zeros((self.nunknowns, self.model.neq,
self.model.npval), 'D')
rhs = np.zeros((self.nunknowns, self.model.ngvbc,
self.model.npval), 'D')
for icp in range(self.ncp):
istart = icp * 2 * self.nlayers
ieq = 0
for e in self.model.elementlist:
if e.nunknowns > 0:
mat[istart: istart + self.nlayers,
ieq: ieq + e.nunknowns, :] = \
e.potinflayers(self.xc[icp], self.yc[icp],
self.layers, self.aqin) / \
self.aqin.T[self.layers][:, np.newaxis, np.newaxis] - \
e.potinflayers(self.xc[icp], self.yc[icp],
self.layers, self.aqout) / \
self.aqout.T[self.layers][:, np.newaxis, np.newaxis]
qxin, qyin = e.disinflayers(
self.xc[icp], self.yc[icp], self.layers, self.aqin)
qxout, qyout = e.disinflayers(
self.xc[icp], self.yc[icp], self.layers, self.aqout)
mat[istart + self.nlayers: istart + 2 * self.nlayers,
ieq: ieq + e.nunknowns, :] = \
(qxin - qxout) * np.cos(self.thetacp[icp]) + \
(qyin - qyout) * np.sin(self.thetacp[icp])
ieq += e.nunknowns
for i in range(self.model.ngbc):
rhs[istart: istart + self.nlayers, i, :] -= (
self.model.gbclist[i].unitpotentiallayers(
self.xc[icp], self.yc[icp], self.layers, self.aqin) /
self.aqin.T[self.layers][:, np.newaxis] -
self.model.gbclist[i].unitpotentiallayers(
self.xc[icp], self.yc[icp], self.layers, self.aqout) /
self.aqout.T[self.layers][:, np.newaxis])
qxin, qyin = self.model.gbclist[i].unitdischargelayers(
self.xc[icp], self.yc[icp], self.layers, self.aqin)
qxout,qyout = self.model.gbclist[i].unitdischargelayers(
self.xc[icp], self.yc[icp], self.layers, self.aqout)
rhs[istart + self.nlayers: istart + 2 * self.nlayers, i, :] -= \
(qxin - qxout) * np.cos(self.thetacp[icp]) + \
(qyin - qyout) * np.sin(self.thetacp[icp])
return mat, rhs
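The inhomogeneity condition enforces continuity of head and of the normal component of flow across the boundary; the projection used above is qx*cos(theta) + qy*sin(theta). That projection can be sketched standalone (my addition, not code from this module):

```python
import math

def normal_flux(qx, qy, theta):
    """Flux component normal to a boundary whose normal makes angle theta with x."""
    return qx * math.cos(theta) + qy * math.sin(theta)
```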
|
A celebration of the 50th anniversary of the iconic British figure James Bond and his Bond girls. The striking gold backdrops of the pieces immediately incite a sense of glamour, danger, lust and elegance: crucial themes that are integral to the Bond franchise and that have secured the character's adoration for generations. Arina Orlova skilfully combines elements of traditional mythology and iconography with contemporary popular culture.
The faceless figures, presented in chronological order, are instantly identifiable as the famous Bond girls: the image of the most desirable women from 1962 to 2012. The girls are modern-day mythological icons; in a society where film is so prominent, characters often become idolised and attract an army of "new worshippers". Arina states that "the visual language of the project is inspired by the iconographic traditions of Byzantium and Ancient Russia, where each colour had its own value and meaning"; for example, the Bond girls who died on screen are presented in red, the colour traditionally used for martyrs and sacrifice.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# OpenBCI - framework for Brain-Computer Interfaces based on EEG signal
# Project was initiated by Magdalena Michalska and Krzysztof Kulewski
# as part of their MSc theses at the University of Warsaw.
# Copyright (C) 2008-2009 Krzysztof Kulewski and Magdalena Michalska
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
# Author:
# Mateusz Kruszyński <mateusz.kruszynski@gmail.com>
"""
This script shows how to use the PyML module.
"""
from PyML import *
def run():
data = VectorDataSet('iris.data', labelsColumn=-1)
#labels = data.labels.L
#some_pattern = data.getPattern(2)
#all_features_as_array = data.getMatrix()
#data.normalize()
#number_of_features = data.numFeatures
#number_of_trials = len(data)
#number_of_every_class = labels.classSize
data2 = data.__class__(data, classes = ['Iris-versicolor', 'Iris-virginica'])
s = SVM()
#r = s.cv(data2)
print(data2)
#r.plotROC()
#param = modelSelection.Param(svm.SVM(), 'C', [0.1, 1, 10, 100, 1000])
#m = modelSelection.ModelSelector(param, measure='balancedSuccessRate')
#m.train(data2)
#best_svm = m.classifier
#for i in range(len(data2)):
# print(best_svm.decisionFunc(data2, i), best_svm.classify(data2, i))
#best_svm_result = best_svm.cv(data2)
if __name__ == '__main__':
run()
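PyML's `data.__class__(data, classes=[...])` call above restricts the dataset to two of the three iris classes before training. The same filtering step can be sketched without PyML (pure-Python stand-in; the function name is my own, not PyML API):

```python
def filter_classes(rows, labels, keep):
    """Keep only the samples whose label is in `keep` (two-class selection)."""
    kept = [(r, l) for r, l in zip(rows, labels) if l in keep]
    return [r for r, _ in kept], [l for _, l in kept]

rows = [[5.1], [6.3], [7.0], [5.9]]
labels = ['Iris-setosa', 'Iris-versicolor', 'Iris-virginica', 'Iris-versicolor']
sub_rows, sub_labels = filter_classes(rows, labels,
                                      {'Iris-versicolor', 'Iris-virginica'})
```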
|
The 1987 is undoubtedly a success for the vintage. It exhibits good cassis richness, a solid texture, and above average concentration and depth. The compact finish contains noticeable tannins. Anticipated maturity: Now-2000. Last tasted, 12/93. *** While this is undoubtedly a success for the vintage, among the first-growths I have a strong preference for Mouton-Rothschild, Lafite-Rothschild, and Haut-Brion. The 1987 Margaux exhibits a much more herbal note than one normally finds, but there is good richness, as well as a solid texture, suggesting concentration and depth. The wine is a bit narrow and compact in the finish, which leads me to believe that it will continue to evolve and open up. It should turn out to be nearly as good as the other so-called "off" years of Margaux during this decade, 1984 and 1980. Anticipated maturity: Now-2000. Last tasted, 1/91.
|
#!/usr/bin/env python
# -*- coding: utf8 -*-
"""
Damage Calculator for my character, Krag
Written by Christopher Durien Ward
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.
You should have received a copy of the GNU General Public License
along with this program. If not, see <http://www.gnu.org/licenses/>
"""
from dice_rolling import damage_roll, attack_roll
#For making text all colorful and easier to read.
class colorz:
PURPLE = '\033[95m'
BLUE = '\033[94m'
GREEN = '\033[92m'
YELLOW = '\033[93m'
RED = '\033[91m'
GREY = '\033[90m'
CYAN = '\033[96m'
WHITE = '\033[97m'
ENDC = '\033[0m'
#################
# THROW BOULDER #
#################
def throw_boulder(char_stats):
total_damage = 0
boulder_attack_bonus = char_stats['BAB'] + char_stats['StrMod'] + char_stats['AttackSizeMod']
boulder_attack_bonus += char_stats['MoraleAttack']
#Range mod
distance = int(input('\n\nHow far away is the target? (in feet) '))
if distance >= char_stats['BoulderRange'] * 5:
print("Target too far away")
return total_damage
range_penalty = 0
while distance >= char_stats['BoulderRange']:
distance -= char_stats['BoulderRange']
range_penalty += 1
#Attack roll
total_attack_roll, multiplier = attack_roll(char_stats, boulder_attack_bonus, range_penalty)
hit = input('Did it hit? (y|n) ')
if hit.lower().startswith('n'):
return total_damage
#Damage roll
damage_mod = char_stats['StrMod'] + char_stats['MoraleDmg']
damage_dice = {
'num_of_dice': 2,
'num_of_sides': 8,
'total_mod': damage_mod,
'multiplier': multiplier
}
total_damage = damage_roll(char_stats, damage_dice)
return total_damage
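The while-loop above accumulates one penalty step per full range increment the boulder crosses; that count has a simple closed form (a sketch of my own, not part of the calculator):

```python
def range_increments(distance, increment):
    """How many full range increments lie between thrower and target."""
    return distance // increment

# e.g. a 120 ft throw with a 50 ft increment crosses two full increments
penalty_steps = range_increments(120, 50)
```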
|
decking boards price singapore - composite fence wholesaler listing of decking products.
composite decking installation cost singapore - We meet every composite wood flooring and decking need and let you enjoy the luxury of timber decking and vinyl flooring at affordable Singapore prices. Decking installation by Huat Professional Engineering: "We are professionals to the..." Decking prices per square foot, Singapore - wood WPC balcony.
Nam Soon Decking Pte Ltd - Home | Facebook - Nam Soon Decking Pte Ltd, Singapore. 4312 likes, 44 talking about this, 1 was here. Nam Soon Decking Pte Ltd is the nationwide acclaimed...
The 25+ best Decking prices ideas on Pinterest | Composite decking - ikea decking tiles uk, how to assemble wooden floor, timber decking price for balcony singapore. Trex Decking Prices | Average Trex Deck Cost Per Square Foot, Materials.
5 Types of Outdoor Decking Options in Singapore - The Floor Gallery, May 9, 2016 - In Singapore, outdoor decking products are slowly gaining popularity as new private homes and HDB flats introduce balcony and patio spaces for home owners. It is therefore worth investing in a good outdoor decking material, whether a wood decking or a composite decking product.
Is Planter and Balcony Decking the New Trend in Singapore Today - Dec 3, 2011 - In Singapore, we have seen an increase in the number of home owners who finished their balconies and planters with decking products such as Eco wood, natural wood and wood-plastic composite. We are also seeing fewer simple tiled balconies and empty planter areas.
Popular Balcony Decking Options in Singapore - Evorich Flooring, Dec 12, 2012 - It is best to look for a credible contractor that remains responsible for your balcony deck even after the sale. In terms of heat-insulating properties, however, wood-plastic composite decking is relatively weaker than natural or Eco wood decking.
Hong Ye Eco Technologies: Home Safety Solution | Outdoor - Or drop us an email at sales@hongye.com.sg. A natural living environment you can actually feel, first-class quality that exceeds your expectations, precisely made to measure for a perfect fit: this is what you will find only at Hong Ye Eco Technologies. Timber decking, floor covering, aluminium cladding.
Nam Soon Timber: Outdoor Timber Decking in Singapore - Nam Soon is proud to announce that we are the first Singaporean timber decking company to receive a patent for our timber deck installation system from the Republic of Singapore (Patents Act, Chapter 221; certificate issued under Section 35). Title: A Timber Deck System, an assembly for installing the...
|
# ##### BEGIN GPL LICENSE BLOCK #####
#
# This program is free software; you can redistribute it and/or
# modify it under the terms of the GNU General Public License
# as published by the Free Software Foundation; either version 2
# of the License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write to the Free Software Foundation,
# Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
#
# ##### END GPL LICENSE BLOCK #####
# Project Name: MakeHuman
# Product Home Page: http://www.makehuman.org/
# Code Home Page: http://code.google.com/p/makehuman/
# Authors: Thomas Larsson
# Script copyright (C) MakeHuman Team 2001-2014
# Coding Standards: See http://www.makehuman.org/node/165
import bpy
from mathutils import Vector, Matrix
from bpy.props import *
from .utils import *
def updateScene():
bpy.ops.object.mode_set(mode='OBJECT')
bpy.ops.object.mode_set(mode='POSE')
def getPoseMatrix(gmat, pb):
restInv = pb.bone.matrix_local.inverted()
if pb.parent:
parInv = pb.parent.matrix.inverted()
parRest = pb.parent.bone.matrix_local
return restInv * (parRest * (parInv * gmat))
else:
return restInv * gmat
def getGlobalMatrix(mat, pb):
gmat = pb.bone.matrix_local * mat
if pb.parent:
parMat = pb.parent.matrix
parRest = pb.parent.bone.matrix_local
return parMat * (parRest.inverted() * gmat)
else:
return gmat
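getPoseMatrix and getGlobalMatrix are inverse transforms; for an unparented bone they reduce to pose = rest^-1 * global and global = rest * pose. A numpy stand-in that checks the round trip (my own illustration; mathutils itself is only available inside Blender):

```python
import numpy as np

def pose_from_global(rest, gmat):
    """Pose-space matrix of an unparented bone: rest^-1 @ global."""
    return np.linalg.inv(rest) @ gmat

def global_from_pose(rest, pmat):
    """Global matrix of an unparented bone: rest @ pose."""
    return rest @ pmat

rest = np.array([[1.0, 0.0, 0.0, 1.0],
                 [0.0, 1.0, 0.0, 2.0],
                 [0.0, 0.0, 1.0, 3.0],
                 [0.0, 0.0, 0.0, 1.0]])  # rest pose: pure translation
gmat = np.eye(4)                         # target global pose
pmat = pose_from_global(rest, gmat)
```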
def matchPoseTranslation(pb, src):
pmat = getPoseMatrix(src.matrix, pb)
insertLocation(pb, pmat)
def matchPoseRotation(pb, src):
pmat = getPoseMatrix(src.matrix, pb)
insertRotation(pb, pmat)
def matchPoseTwist(pb, src):
pmat0 = src.matrix_basis
euler = pmat0.to_3x3().to_euler('YZX')
euler.z = 0
pmat = euler.to_matrix().to_4x4()
pmat.col[3] = pmat0.col[3]
insertRotation(pb, pmat)
def printMatrix(string, mat):
print(string)
for i in range(4):
print(" %.4g %.4g %.4g %.4g" % tuple(mat[i]))
def matchIkLeg(legIk, toeFk, mBall, mToe, mHeel):
rmat = toeFk.matrix.to_3x3()
tHead = Vector(toeFk.matrix.col[3][:3])
ty = rmat.col[1]
tail = tHead + ty * toeFk.bone.length
zBall = mBall.matrix.col[3][2]
zToe = mToe.matrix.col[3][2]
zHeel = mHeel.matrix.col[3][2]
x = Vector(rmat.col[0])
y = Vector(rmat.col[1])
z = Vector(rmat.col[2])
if zHeel > zBall and zHeel > zToe:
# 1. foot.ik is flat
if abs(y[2]) > abs(z[2]):
y = -z
y[2] = 0
else:
# 2. foot.ik starts at heel
hHead = Vector(mHeel.matrix.col[3][:3])
y = tail - hHead
y.normalize()
x -= x.dot(y)*y
x.normalize()
if abs(x[2]) < 0.7:
x[2] = 0
x.normalize()
z = x.cross(y)
head = tail - y * legIk.bone.length
# Create matrix
gmat = Matrix()
gmat.col[0][:3] = x
gmat.col[1][:3] = y
gmat.col[2][:3] = z
gmat.col[3][:3] = head
pmat = getPoseMatrix(gmat, legIk)
insertLocation(legIk, pmat)
insertRotation(legIk, pmat)
def matchPoleTarget(pb, above, below):
x = Vector(above.matrix.col[1][:3])
y = Vector(below.matrix.col[1][:3])
p0 = Vector(below.matrix.col[3][:3])
n = x.cross(y)
if abs(n.length) > 1e-4:
z = x - y
n.normalize()
z -= z.dot(n)*n
z.normalize()
p = p0 + 6*pb.length*z
else:
p = p0
gmat = Matrix.Translation(p)
pmat = getPoseMatrix(gmat, pb)
insertLocation(pb, pmat)
def matchPoseReverse(pb, src):
gmat = src.matrix
tail = gmat.col[3] + src.length * gmat.col[1]
rmat = Matrix((gmat.col[0], -gmat.col[1], -gmat.col[2], tail))
rmat.transpose()
pmat = getPoseMatrix(rmat, pb)
pb.matrix_basis = pmat
insertRotation(pb, pmat)
def matchPoseScale(pb, src):
pmat = getPoseMatrix(src.matrix, pb)
pb.scale = pmat.to_scale()
pb.keyframe_insert("scale", group=pb.name)
def snapFkArm(rig, snapIk, snapFk, frame):
(uparmFk, loarmFk, handFk) = snapFk
(uparmIk, loarmIk, elbow, elbowPt, handIk) = snapIk
matchPoseRotation(uparmFk, uparmIk)
matchPoseRotation(loarmFk, loarmIk)
matchPoseRotation(handFk, handIk)
def snapIkArm(rig, snapIk, snapFk, frame):
(uparmIk, loarmIk, elbow, elbowPt, handIk) = snapIk
(uparmFk, loarmFk, handFk) = snapFk
matchPoseTranslation(handIk, handFk)
matchPoseRotation(handIk, handFk)
updateScene()
matchPoleTarget(elbowPt, uparmFk, loarmFk)
#matchPoseRotation(uparmIk, uparmFk)
#matchPoseRotation(loarmIk, loarmFk)
def snapFkLeg(rig, snapIk, snapFk, frame, legIkToAnkle):
(uplegIk, lolegIk, kneePt, ankleIk, legIk, footRev, toeRev, mBall, mToe, mHeel) = snapIk
(uplegFk, lolegFk, footFk, toeFk) = snapFk
matchPoseRotation(uplegFk, uplegIk)
matchPoseRotation(lolegFk, lolegIk)
if not legIkToAnkle:
matchPoseReverse(footFk, footRev)
matchPoseReverse(toeFk, toeRev)
def snapIkLeg(rig, snapIk, snapFk, frame, legIkToAnkle):
(uplegIk, lolegIk, kneePt, ankleIk, legIk, footRev, toeRev, mBall, mToe, mHeel) = snapIk
(uplegFk, lolegFk, footFk, toeFk) = snapFk
if legIkToAnkle:
matchPoseTranslation(ankleIk, footFk)
else:
matchIkLeg(legIk, toeFk, mBall, mToe, mHeel)
matchPoseTwist(lolegIk, lolegFk)
updateScene()
matchPoseReverse(toeRev, toeFk)
updateScene()
matchPoseReverse(footRev, footFk)
updateScene()
matchPoleTarget(kneePt, uplegFk, lolegFk)
if not legIkToAnkle:
matchPoseTranslation(ankleIk, footFk)
SnapBonesAlpha8 = {
"Arm" : ["upper_arm", "forearm", "hand"],
"ArmFK" : ["upper_arm.fk", "forearm.fk", "hand.fk"],
"ArmIK" : ["upper_arm.ik", "forearm.ik", None, "elbow.pt.ik", "hand.ik"],
"Leg" : ["thigh", "shin", "foot", "toe"],
"LegFK" : ["thigh.fk", "shin.fk", "foot.fk", "toe.fk"],
"LegIK" : ["thigh.ik", "shin.ik", "knee.pt.ik", "ankle.ik", "foot.ik", "foot.rev", "toe.rev", "ball.marker", "toe.marker", "heel.marker"],
}
def getSnapBones(rig, key, suffix):
try:
rig.pose.bones["thigh.fk.L"]
names = SnapBonesAlpha8[key]
suffix = '.' + suffix[1:]
except KeyError:
names = None
if not names:
raise MocapError("Not an mhx armature")
pbones = []
constraints = []
for name in names:
if name:
pb = rig.pose.bones[name+suffix]
pbones.append(pb)
for cns in pb.constraints:
if cns.type == 'LIMIT_ROTATION' and not cns.mute:
constraints.append(cns)
else:
pbones.append(None)
return tuple(pbones), constraints
def muteConstraints(constraints, value):
for cns in constraints:
cns.mute = value
def clearAnimation(rig, scn, act, type, snapBones):
from . import target
target.getTargetArmature(rig, scn)
ikBones = []
if scn.McpFkIkArms:
for bname in snapBones["Arm" + type]:
if bname is not None:
ikBones += [bname+".L", bname+".R"]
if scn.McpFkIkLegs:
for bname in snapBones["Leg" + type]:
if bname is not None:
ikBones += [bname+".L", bname+".R"]
ikFCurves = []
for fcu in act.fcurves:
words = fcu.data_path.split('"')
if (words[0] == "pose.bones[" and
words[1] in ikBones):
ikFCurves.append(fcu)
if ikFCurves == []:
raise MocapError("%s bones have no animation" % type)
for fcu in ikFCurves:
act.fcurves.remove(fcu)
def setMhxIk(rig, useArms, useLegs, turnOn):
if isMhxRig(rig):
ikLayers = []
fkLayers = []
if useArms:
rig["MhaArmIk_L"] = turnOn
rig["MhaArmIk_R"] = turnOn
ikLayers += [2,18]
fkLayers += [3,19]
if useLegs:
rig["MhaLegIk_L"] = turnOn
rig["MhaLegIk_R"] = turnOn
ikLayers += [4,20]
fkLayers += [5,21]
if turnOn:
first = ikLayers
second = fkLayers
else:
first = fkLayers
second = ikLayers
for n in first:
rig.data.layers[n] = True
for n in second:
rig.data.layers[n] = False
def transferMhxToFk(rig, scn):
from . import target
target.getTargetArmature(rig, scn)
lArmSnapIk,lArmCnsIk = getSnapBones(rig, "ArmIK", "_L")
lArmSnapFk,lArmCnsFk = getSnapBones(rig, "ArmFK", "_L")
rArmSnapIk,rArmCnsIk = getSnapBones(rig, "ArmIK", "_R")
rArmSnapFk,rArmCnsFk = getSnapBones(rig, "ArmFK", "_R")
lLegSnapIk,lLegCnsIk = getSnapBones(rig, "LegIK", "_L")
lLegSnapFk,lLegCnsFk = getSnapBones(rig, "LegFK", "_L")
rLegSnapIk,rLegCnsIk = getSnapBones(rig, "LegIK", "_R")
rLegSnapFk,rLegCnsFk = getSnapBones(rig, "LegFK", "_R")
#muteAllConstraints(rig, True)
oldLayers = list(rig.data.layers)
setMhxIk(rig, scn.McpFkIkArms, scn.McpFkIkLegs, True)
rig.data.layers = MhxLayers
lLegIkToAnkle = rig["MhaLegIkToAnkle_L"]
rLegIkToAnkle = rig["MhaLegIkToAnkle_R"]
frames = getActiveFramesBetweenMarkers(rig, scn)
nFrames = len(frames)
limbsBendPositive(rig, scn.McpFkIkArms, scn.McpFkIkLegs, frames)
for n,frame in enumerate(frames):
showProgress(n, frame, nFrames)
scn.frame_set(frame)
updateScene()
if scn.McpFkIkArms:
snapFkArm(rig, lArmSnapIk, lArmSnapFk, frame)
snapFkArm(rig, rArmSnapIk, rArmSnapFk, frame)
if scn.McpFkIkLegs:
snapFkLeg(rig, lLegSnapIk, lLegSnapFk, frame, lLegIkToAnkle)
snapFkLeg(rig, rLegSnapIk, rLegSnapFk, frame, rLegIkToAnkle)
rig.data.layers = oldLayers
setMhxIk(rig, scn.McpFkIkArms, scn.McpFkIkLegs, False)
setInterpolation(rig)
#muteAllConstraints(rig, False)
def transferMhxToIk(rig, scn):
from . import target
target.getTargetArmature(rig, scn)
lArmSnapIk,lArmCnsIk = getSnapBones(rig, "ArmIK", "_L")
lArmSnapFk,lArmCnsFk = getSnapBones(rig, "ArmFK", "_L")
rArmSnapIk,rArmCnsIk = getSnapBones(rig, "ArmIK", "_R")
rArmSnapFk,rArmCnsFk = getSnapBones(rig, "ArmFK", "_R")
lLegSnapIk,lLegCnsIk = getSnapBones(rig, "LegIK", "_L")
lLegSnapFk,lLegCnsFk = getSnapBones(rig, "LegFK", "_L")
rLegSnapIk,rLegCnsIk = getSnapBones(rig, "LegIK", "_R")
rLegSnapFk,rLegCnsFk = getSnapBones(rig, "LegFK", "_R")
#muteAllConstraints(rig, True)
oldLayers = list(rig.data.layers)
setMhxIk(rig, scn.McpFkIkArms, scn.McpFkIkLegs, False)
rig.data.layers = MhxLayers
lLegIkToAnkle = rig["MhaLegIkToAnkle_L"]
rLegIkToAnkle = rig["MhaLegIkToAnkle_R"]
frames = getActiveFramesBetweenMarkers(rig, scn)
#frames = range(scn.frame_start, scn.frame_end+1)
nFrames = len(frames)
for n,frame in enumerate(frames):
showProgress(n, frame, nFrames)
scn.frame_set(frame)
updateScene()
if scn.McpFkIkArms:
snapIkArm(rig, lArmSnapIk, lArmSnapFk, frame)
snapIkArm(rig, rArmSnapIk, rArmSnapFk, frame)
if scn.McpFkIkLegs:
snapIkLeg(rig, lLegSnapIk, lLegSnapFk, frame, lLegIkToAnkle)
snapIkLeg(rig, rLegSnapIk, rLegSnapFk, frame, rLegIkToAnkle)
rig.data.layers = oldLayers
setMhxIk(rig, scn.McpFkIkArms, scn.McpFkIkLegs, True)
setInterpolation(rig)
#muteAllConstraints(rig, False)
def muteAllConstraints(rig, value):
lArmSnapIk,lArmCnsIk = getSnapBones(rig, "ArmIK", "_L")
lArmSnapFk,lArmCnsFk = getSnapBones(rig, "ArmFK", "_L")
rArmSnapIk,rArmCnsIk = getSnapBones(rig, "ArmIK", "_R")
rArmSnapFk,rArmCnsFk = getSnapBones(rig, "ArmFK", "_R")
lLegSnapIk,lLegCnsIk = getSnapBones(rig, "LegIK", "_L")
lLegSnapFk,lLegCnsFk = getSnapBones(rig, "LegFK", "_L")
rLegSnapIk,rLegCnsIk = getSnapBones(rig, "LegIK", "_R")
rLegSnapFk,rLegCnsFk = getSnapBones(rig, "LegFK", "_R")
muteConstraints(lArmCnsIk, value)
muteConstraints(lArmCnsFk, value)
muteConstraints(rArmCnsIk, value)
muteConstraints(rArmCnsFk, value)
muteConstraints(lLegCnsIk, value)
muteConstraints(lLegCnsFk, value)
muteConstraints(rLegCnsIk, value)
muteConstraints(rLegCnsFk, value)
#------------------------------------------------------------------------
# Rigify
#------------------------------------------------------------------------
SnapBonesRigify = {
"Arm" : ["upper_arm", "forearm", "hand"],
"ArmFK" : ["upper_arm.fk", "forearm.fk", "hand.fk"],
"ArmIK" : ["hand_ik", "elbow_target.ik"],
"Leg" : ["thigh", "shin", "foot"],
"LegFK" : ["thigh.fk", "shin.fk", "foot.fk"],
"LegIK" : ["foot.ik", "foot_roll.ik", "knee_target.ik"],
}
def setLocation(bname, rig):
pb = rig.pose.bones[bname]
pb.keyframe_insert("location", group=pb.name)
def setRotation(bname, rig):
pb = rig.pose.bones[bname]
if pb.rotation_mode == 'QUATERNION':
pb.keyframe_insert("rotation_quaternion", group=pb.name)
else:
pb.keyframe_insert("rotation_euler", group=pb.name)
def setLocRot(bname, rig):
    pb = rig.pose.bones[bname]
    pb.keyframe_insert("location", group=pb.name)
    if pb.rotation_mode == 'QUATERNION':
        pb.keyframe_insert("rotation_quaternion", group=pb.name)
    else:
        pb.keyframe_insert("rotation_euler", group=pb.name)
def setRigifyFKIK(rig, value):
rig.pose.bones["hand.ik.L"]["ikfk_switch"] = value
rig.pose.bones["hand.ik.R"]["ikfk_switch"] = value
rig.pose.bones["foot.ik.L"]["ikfk_switch"] = value
rig.pose.bones["foot.ik.R"]["ikfk_switch"] = value
on = (value < 0.5)
for n in [6, 9, 12, 15]:
rig.data.layers[n] = on
for n in [7, 10, 13, 16]:
rig.data.layers[n] = not on
def transferRigifyToFk(rig, scn):
from rig_ui import fk2ik_arm, fk2ik_leg
frames = getActiveFramesBetweenMarkers(rig, scn)
nFrames = len(frames)
for n,frame in enumerate(frames):
showProgress(n, frame, nFrames)
scn.frame_set(frame)
updateScene()
if scn.McpFkIkArms:
for suffix in [".L", ".R"]:
uarm = "upper_arm.fk"+suffix
farm = "forearm.fk"+suffix
hand = "hand.fk"+suffix
uarmi = "MCH-upper_arm.ik"+suffix
farmi = "MCH-forearm.ik"+suffix
handi = "hand.ik"+suffix
fk = [uarm,farm,hand]
ik = [uarmi,farmi,handi]
fk2ik_arm(rig, fk, ik)
setRotation(uarm, rig)
setRotation(farm, rig)
setRotation(hand, rig)
if scn.McpFkIkLegs:
for suffix in [".L", ".R"]:
thigh = "thigh.fk"+suffix
shin = "shin.fk"+suffix
foot = "foot.fk"+suffix
mfoot = "MCH-foot"+suffix
thighi = "MCH-thigh.ik"+suffix
shini = "MCH-shin.ik"+suffix
footi = "foot.ik"+suffix
mfooti = "MCH-foot"+suffix+".001"
fk = [thigh,shin,foot,mfoot]
ik = [thighi,shini,footi,mfooti]
fk2ik_leg(rig, fk, ik)
setRotation(thigh, rig)
setRotation(shin, rig)
setRotation(foot, rig)
setInterpolation(rig)
for suffix in [".L", ".R"]:
if scn.McpFkIkArms:
rig.pose.bones["hand.ik"+suffix]["ikfk_switch"] = 0.0
if scn.McpFkIkLegs:
rig.pose.bones["foot.ik"+suffix]["ikfk_switch"] = 0.0
def transferRigifyToIk(rig, scn):
from rig_ui import ik2fk_arm, ik2fk_leg
frames = getActiveFramesBetweenMarkers(rig, scn)
nFrames = len(frames)
for n,frame in enumerate(frames):
showProgress(n, frame, nFrames)
scn.frame_set(frame)
updateScene()
if scn.McpFkIkArms:
for suffix in [".L", ".R"]:
uarm = "upper_arm.fk"+suffix
farm = "forearm.fk"+suffix
hand = "hand.fk"+suffix
uarmi = "MCH-upper_arm.ik"+suffix
farmi = "MCH-forearm.ik"+suffix
handi = "hand.ik"+suffix
pole = "elbow_target.ik"+suffix
fk = [uarm,farm,hand]
ik = [uarmi,farmi,handi,pole]
ik2fk_arm(rig, fk, ik)
setLocation(pole, rig)
setLocRot(handi, rig)
if scn.McpFkIkLegs:
for suffix in [".L", ".R"]:
thigh = "thigh.fk"+suffix
shin = "shin.fk"+suffix
foot = "foot.fk"+suffix
mfoot = "MCH-foot"+suffix
thighi = "MCH-thigh.ik"+suffix
shini = "MCH-shin.ik"+suffix
footi = "foot.ik"+suffix
footroll = "foot_roll.ik"+suffix
pole = "knee_target.ik"+suffix
mfooti = "MCH-foot"+suffix+".001"
fk = [thigh,shin,foot,mfoot]
ik = [thighi,shini,footi,footroll,pole,mfooti]
ik2fk_leg(rig, fk, ik)
setLocation(pole, rig)
setLocRot(footi, rig)
setRotation(footroll, rig)
setInterpolation(rig)
for suffix in [".L", ".R"]:
if scn.McpFkIkArms:
rig.pose.bones["hand.ik"+suffix]["ikfk_switch"] = 1.0
if scn.McpFkIkLegs:
rig.pose.bones["foot.ik"+suffix]["ikfk_switch"] = 1.0
#-------------------------------------------------------------
# Limbs bend positive
#-------------------------------------------------------------
def limbsBendPositive(rig, doElbows, doKnees, frames):
limbs = {}
if doElbows:
pb = getTrgBone("forearm.L", rig)
minimizeFCurve(pb, rig, 0, frames)
pb = getTrgBone("forearm.R", rig)
minimizeFCurve(pb, rig, 0, frames)
if doKnees:
pb = getTrgBone("shin.L", rig)
minimizeFCurve(pb, rig, 0, frames)
pb = getTrgBone("shin.R", rig)
minimizeFCurve(pb, rig, 0, frames)
def minimizeFCurve(pb, rig, index, frames):
fcu = findBoneFCurve(pb, rig, index)
if fcu is None:
return
y0 = fcu.evaluate(0)
t0 = frames[0]
t1 = frames[-1]
for kp in fcu.keyframe_points:
t = kp.co[0]
if t >= t0 and t <= t1:
y = kp.co[1]
if y < y0:
kp.co[1] = y0
class VIEW3D_OT_McpLimbsBendPositiveButton(bpy.types.Operator):
bl_idname = "mcp.limbs_bend_positive"
bl_label = "Bend Limbs Positive"
bl_description = "Ensure that limbs' X rotation is positive."
bl_options = {'UNDO'}
def execute(self, context):
from .target import getTargetArmature
scn = context.scene
rig = context.object
try:
layers = list(rig.data.layers)
getTargetArmature(rig, scn)
frames = getActiveFramesBetweenMarkers(rig, scn)
limbsBendPositive(rig, scn.McpBendElbows, scn.McpBendKnees, frames)
rig.data.layers = layers
print("Limbs bent positive")
except MocapError:
bpy.ops.mcp.error('INVOKE_DEFAULT')
return{'FINISHED'}
#------------------------------------------------------------------------
# Buttons
#------------------------------------------------------------------------
class VIEW3D_OT_TransferToFkButton(bpy.types.Operator):
bl_idname = "mcp.transfer_to_fk"
bl_label = "Transfer IK => FK"
bl_description = "Transfer IK animation to FK bones"
bl_options = {'UNDO'}
def execute(self, context):
use_global_undo = context.user_preferences.edit.use_global_undo
context.user_preferences.edit.use_global_undo = False
try:
startProgress("Transfer to FK")
rig = context.object
scn = context.scene
if isMhxRig(rig):
transferMhxToFk(rig, scn)
elif isRigify(rig):
transferRigifyToFk(rig, scn)
else:
                raise MocapError("Cannot transfer to FK with this rig")
endProgress("Transfer to FK completed")
except MocapError:
bpy.ops.mcp.error('INVOKE_DEFAULT')
finally:
context.user_preferences.edit.use_global_undo = use_global_undo
return{'FINISHED'}
class VIEW3D_OT_TransferToIkButton(bpy.types.Operator):
bl_idname = "mcp.transfer_to_ik"
bl_label = "Transfer FK => IK"
bl_description = "Transfer FK animation to IK bones"
bl_options = {'UNDO'}
def execute(self, context):
use_global_undo = context.user_preferences.edit.use_global_undo
context.user_preferences.edit.use_global_undo = False
try:
startProgress("Transfer to IK")
rig = context.object
scn = context.scene
if isMhxRig(rig):
transferMhxToIk(rig, scn)
elif isRigify(rig):
transferRigifyToIk(rig, scn)
else:
                raise MocapError("Cannot transfer to IK with this rig")
endProgress("Transfer to IK completed")
except MocapError:
bpy.ops.mcp.error('INVOKE_DEFAULT')
finally:
context.user_preferences.edit.use_global_undo = use_global_undo
return{'FINISHED'}
class VIEW3D_OT_ClearAnimationButton(bpy.types.Operator):
bl_idname = "mcp.clear_animation"
bl_label = "Clear Animation"
bl_description = "Clear Animation For FK or IK Bones"
bl_options = {'UNDO'}
type = StringProperty()
def execute(self, context):
use_global_undo = context.user_preferences.edit.use_global_undo
context.user_preferences.edit.use_global_undo = False
try:
startProgress("Clear animation")
rig = context.object
scn = context.scene
if not rig.animation_data:
raise MocapError("Rig has no animation data")
act = rig.animation_data.action
if not act:
raise MocapError("Rig has no action")
if isMhxRig(rig):
clearAnimation(rig, scn, act, self.type, SnapBonesAlpha8)
setMhxIk(rig, scn.McpFkIkArms, scn.McpFkIkLegs, (self.type=="FK"))
elif isRigify(rig):
clearAnimation(rig, scn, act, self.type, SnapBonesRigify)
else:
                raise MocapError("Cannot clear %s animation with this rig" % self.type)
endProgress("Animation cleared")
except MocapError:
bpy.ops.mcp.error('INVOKE_DEFAULT')
finally:
context.user_preferences.edit.use_global_undo = use_global_undo
return{'FINISHED'}
#------------------------------------------------------------------------
# Debug
#------------------------------------------------------------------------
def printHand(context):
rig = context.object
'''
handFk = rig.pose.bones["hand.fk.L"]
handIk = rig.pose.bones["hand.ik.L"]
print(handFk)
print(handFk.matrix)
print(handIk)
print(handIk.matrix)
'''
footIk = rig.pose.bones["foot.ik.L"]
print(footIk)
print(footIk.matrix)
class VIEW3D_OT_PrintHandsButton(bpy.types.Operator):
bl_idname = "mcp.print_hands"
bl_label = "Print Hands"
bl_options = {'UNDO'}
def execute(self, context):
printHand(context)
return{'FINISHED'}
|
The Canadian dollar strengthened against its U.S. counterpart on Friday, as bets for an interest-rate cut by the Bank of Canada this year were slashed after domestic data showed a spike in jobs that surprised investors.
Employers added 55,900 jobs in February, which was the third month of outsized gains in the last four and exceeded the 20,000 jobs created in the United States for the same month. Analysts had forecast February job numbers to be flat in Canada.
“It was a great report card for the Canadian jobs market and it flies in the face of some of the other statistics that we’ve been seeing lately out of Canada,” said Scott Smith, managing partner at Viewpoint Investment Partners.
Data one week ago showed that Canada’s economy barely expanded in the fourth quarter.
Chances of an interest-rate cut by December, which had climbed this week on a more dovish tone from the Bank of Canada, fell to less than 20 per cent from about 40 per cent before the jobs data, the overnight index swaps market indicated.
“I think if you look at this big picture it is an argument for the Bank of Canada to remain on the sidelines in the near term, rather than one for them to consider eases,” said Andrew Kelvin, senior rates strategist at TD Securities.
The Bank of Canada’s benchmark interest rate is at 1.75 per cent.
At 4:04 p.m., the Canadian dollar was trading 0.4 per cent higher at 1.3405 to the greenback, or 74.60 U.S. cents. The currency, which touched its weakest in more than two months at 1.3467 on Thursday, traded in a range of 1.3391 to 1.3466.
For the week, the loonie fell 0.8 per cent.
Gains for the loonie on Friday came despite separate data showing that Canadian housing starts tumbled about 16 per cent in February.
Also, the price of oil, one of Canada’s major exports, was pressured by signs of a slowing global economy. U.S. crude oil futures settled 1 per cent lower at $56.07 a barrel.
Speculators have raised their bearish bets on the Canadian dollar, data from the U.S. Commodity Futures Trading Commission and Reuters calculations showed. As of March 5, net short positions had increased to 40,444 contracts from 39,177 in the prior week.
Canadian government bond prices were mixed across a flatter yield curve, with the two-year price down 5 cents to yield 1.651 per cent.
|
import zipfile
import StringIO
import tempfile
import shutil
import os
from impactlib.load import load_repo_data
from impactlib.refresh import strip_extra
from impactlib.github import GitHub
from impactlib.semver import SemanticVersion
from impactlib import config
try:
import colorama
from colorama import Fore, Back, Style
    colorama.init(autoreset=True)  # reset color after each print
use_color = True
except ImportError:
use_color = False
def get_package(pkg):
repo_data = load_repo_data()
if not pkg in repo_data:
msg = "No package named '"+pkg+"' found"
if use_color:
print Fore.RED+msg
else:
print msg
return None
return repo_data[pkg]
def latest_version(versions):
if len(versions)==0:
return None
keys = versions.keys()
svs = map(lambda x: (SemanticVersion(x, tolerant=True), x), keys)
    # cmp must return -1/0/1; a boolean comparator breaks the sort.
    # Sort descending so the most recent version comes first.
    sorted_versions = sorted(svs, cmp=lambda x, y: cmp(y[0], x[0]))
    return sorted_versions[0][1]
def install_version(pkg, version, github, dryrun, verbose):
repo_data = load_repo_data()
pdata = get_package(pkg)
if pdata==None:
return
versions = pdata["versions"]
vdata = None
for ver in versions:
if ver==version:
vdata = versions[ver]
if vdata==None:
msg = "No version '"+str(version)+"' found for package '"+str(pkg)+"'"
if use_color:
print Fore.RED+msg
else:
print msg
return
zipurl = vdata["zipball_url"]
vpath = vdata["path"]
if verbose:
print " URL: "+zipurl
if not dryrun:
zfp = StringIO.StringIO(github.getDownload(zipurl).read())
zf = zipfile.ZipFile(zfp)
root = zf.infolist()[0].filename
dst = os.path.join(".", str(pkg)+" "+str(strip_extra(version)))
if os.path.exists(dst):
print " Directory "+dst+" already exists, skipping"
else:
td = tempfile.mkdtemp()
zf.extractall(td)
src = os.path.join(td, root, vpath)
if verbose:
print " Root zip directory: "+root
print " Temp directory: "+str(td)
print " Version path: "+str(vpath)
print " Source: "+str(src)
print " Destination: "+str(dst)
shutil.copytree(src,dst)
shutil.rmtree(td)
def elaborate_dependencies(pkgname, version, current):
repo_data = load_repo_data()
if not pkgname in repo_data:
print " No information for package "+pkgname+", skipping"
return current
if not version in repo_data[pkgname]["versions"]:
print " No version "+version+" of package "+pkgname+" found, skipping"
return current
ret = current.copy()
ret[pkgname] = version
vdata = repo_data[pkgname]["versions"][version]
deps = vdata["dependencies"]
for dep in deps:
dname = dep["name"]
dver = dep["version"]
if dname in ret:
if dver==ret[dname]:
# This could avoid circular dependencies?
continue
else:
raise NameError("Dependency on version %s and %s of %s" % \
(ret[dname], dver, dname))
subs = elaborate_dependencies(dname, dver, ret)
for sub in subs:
if sub in ret:
if subs[sub]==ret[sub]:
continue
else:
raise NameError("Dependency on version %s and %s of %s" % \
                                (subs[sub], ret[sub], sub))
ret[sub] = subs[sub]
return ret
def install(pkgname, verbose, dry_run):
username = config.get("Impact", "username", None)
password = config.get("Impact", "password", None)
token = config.get("Impact", "token", None)
if "#" in pkgname:
pkg_data = pkgname.split("#")
else:
pkg_data = pkgname.split(" ")
if len(pkg_data)==1:
pkg = pkg_data[0]
version = None
elif len(pkg_data)==2:
pkg = pkg_data[0]
version = pkg_data[1]
else:
raise ValueError("Package name must be of the form name[#version]")
pdata = get_package(pkg)
if pdata==None:
return
if version==None:
version = latest_version(pdata["versions"])
if verbose:
print " Choosing latest version: "+version
if version==None:
msg = "No (semantic) versions found for package '"+pkg+"'"
if use_color:
print Fore.RED+msg
else:
print msg
return
msg = "Installing version '"+version+"' of package '"+pkg+"'"
if use_color:
print Fore.GREEN+msg
else:
print msg
# Setup connection to github
github = GitHub(username=username, password=password,
token=token)
pkgversions = elaborate_dependencies(pkg, version, current={})
if verbose:
print "Libraries to install:"
for pkgname in pkgversions:
print " "+pkgname+" version "+pkgversions[pkgname]
print "Installation..."
for pkgname in pkgversions:
install_version(pkgname, pkgversions[pkgname], github,
dryrun=dry_run, verbose=verbose)
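`elaborate_dependencies` above is the core of the installer: it recursively pins every transitive dependency and raises `NameError` when two different versions of the same package are required. A self-contained sketch of the same strategy against a made-up toy repository (package names and versions below are invented for illustration):

```python
def elaborate(repo, name, version, current=None):
    """Recursively collect {package: version} pins, raising NameError on
    a version conflict -- the same strategy as elaborate_dependencies."""
    current = dict(current or {})
    if current.get(name) == version:
        return current  # already pinned; also breaks dependency cycles
    if name in current:
        raise NameError("Dependency on version %s and %s of %s"
                        % (current[name], version, name))
    current[name] = version
    for dep_name, dep_version in repo[name][version]:
        current = elaborate(repo, dep_name, dep_version, current)
    return current

# Toy repository: {package: {version: [(dependency, its version), ...]}}
repo = {
    "Modelica": {"3.2.1": []},
    "Buildings": {"1.5": [("Modelica", "3.2.1")]},
    "App": {"1.0": [("Buildings", "1.5"), ("Modelica", "3.2.1")]},
}
pins = elaborate(repo, "App", "1.0")
```

Unlike the original, this sketch folds the sub-elaboration merge into the recursion itself; the conflict semantics are the same.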
|
Protect yourself a little better!
Top rated security for 24 hours unattended operations.
Eliminate long lines and reduce labour costs.
Eliminate tension at the checkout area during peak hours.
A team, driven by passion and 30 years of experience.
We provide innovative industry-specific hardware for security, automation and self-service.
|
import sys
import socket
import errno
import re
import xmltodict
import logging
if sys.version_info >= (3, 0):
from urllib import request
elif sys.version_info < (3, 0):
import urllib as request
class Service(object):
def __init__(self, data):
self.service_type = data['serviceType']
self.service_id = data['serviceId']
self.scpd_url = data['SCPDURL']
self.control_url = data['controlURL']
self.event_url = data['eventSubURL']
class Device(object):
def __init__(self, url, data):
doc = xmltodict.parse(data)
device = doc['root']['device']
service = device['serviceList']['service']
try:
self.friendly_name = device['friendlyName']
self.manufacturer = device['manufacturer']
self.model_name = device['modelName']
self.model_description = device['modelDescription']
        except KeyError: pass
self.url_base = url
self.services = []
if(type(service) is list):
for s in service: self.services.append(Service(s))
else: self.services.append(Service(service))
def get_service(self, type):
for s in self.services:
if(s.service_type == type): return s
return None
def has_service(self, type):
if(self.get_service(type) == None): return False
else: return True
def get_base_url(path):
try:
        m = re.match(r'https?://([a-zA-Z0-9.\-:]+)/', path)
return m.group(1)
except:
return None
def get_device(res):
url = res.location
baseurl = get_base_url(url)
if not baseurl: return None
try:
con = request.urlopen(url)
return Device(baseurl, con.read())
    except OSError:
        return None
def get_devices(resources):
result = []
for r in resources:
dev = get_device(r)
if dev: result.append(dev)
return result
def filter_devices_by_service_type(devices, type):
result = []
for d in devices:
if(d.has_service(type)): result.append(d)
return result
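`get_base_url` above strips the scheme and path from the SSDP LOCATION header so the device description can be associated with its host. A standalone sketch of that extraction, mirroring the module's regex (with `-` allowed so dashed hostnames also match; the URLs below are examples):

```python
import re

def base_of(location):
    """Return the host[:port] part of an http(s) URL, or None if the
    URL does not match. Mirrors get_base_url() above."""
    m = re.match(r'https?://([a-zA-Z0-9.\-:]+)/', location)
    return m.group(1) if m else None

host = base_of("http://192.168.1.1:49152/rootDesc.xml")
```

Note that a LOCATION value without any trailing path (e.g. `http://host` with no final `/`) would not match; device description URLs in practice always include a path.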
|
The 14th Amendment states that "all persons born or naturalized in the United States and subject to the jurisdiction thereof, are citizens of the United States and of the State wherein they reside".
House Speaker Paul Ryan and Senate Judiciary Committee Chairman Chuck Grassley pushed back, saying the change should be made through Congress, not by executive fiat. "I'm concerned about any president trying to rewrite the constitution by themselves", King said. Critics of birthright citizenship say that the amendment was never meant to apply to children of two non-citizens - and particularly those who have come to the country illegally.
Trump stated his disapproval of the current system that allows children born on American soil to noncitizen parents to be granted citizenship. A longtime proponent of comprehensive immigration reform, Graham would likely want to tie any effort to end birthright citizenship to major changes in immigration policy that could benefit illegal immigrants already living in-country, perhaps with a proposed "path to citizenship", or a long-term partial amnesty agreement. But according to many constitutional scholars, Trump doesn't have that right, as it would be a blatant violation of the 14th Amendment to the U.S. Constitution. Responding to critics on Wednesday, he said that birthright citizenship will be abolished "one way or another".
Under the current interpretation of immigration law, children born in the United States to unauthorized immigrants gain access to U.S. benefits.
President Donald Trump plans to abolish the right to citizenship for anyone born in the United States - guaranteed by the 14th Amendment to the US Constitution - with an executive order, he said in an interview excerpt released Tuesday.
The president also tweeted about Democrat Harry Reid, who in 1993 called for the end of the birthright policy. And 84 years later in its 1982 ruling in Plyler v. Doe, the Supreme Court ruled that even those who enter the US illegally are within U.S. jurisdiction, which means that any of their U.S.-born children enjoy 14th Amendment protections.
In another Wednesday tweet, the president acknowledged that if he does sign an order, the Supreme Court would inevitably have to settle a subsequent court battle. "Oh, yeah? Everybody born here is a citizen, so says the Constitution".
Another Democratic leader Nancy Pelosi slammed Mr. Trump for his move.
"Illegals can be prosecuted, illegals can go to court and sue". I think the President is looking at executive action.
|
# -*- coding: UTF-8 -*-
#/*
# * Copyright (C) 2013 Maros Ondrasek
# *
# *
# * This Program is free software; you can redistribute it and/or modify
# * it under the terms of the GNU General Public License as published by
# * the Free Software Foundation; either version 2, or (at your option)
# * any later version.
# *
# * This Program is distributed in the hope that it will be useful,
# * but WITHOUT ANY WARRANTY; without even the implied warranty of
# * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# * GNU General Public License for more details.
# *
# * You should have received a copy of the GNU General Public License
# * along with this program; see the file COPYING. If not, write to
# * the Free Software Foundation, 675 Mass Ave, Cambridge, MA 02139, USA.
# * http://www.gnu.org/copyleft/gpl.html
# *
# */
import sys
import os
sys.path.append(os.path.join(os.path.dirname(__file__), 'resources', 'lib'))
import joj
import xbmcprovider,xbmcaddon,xbmcutil,xbmc
import util
import traceback,urllib2
__scriptid__ = 'plugin.video.joj.sk'
__scriptname__ = 'joj.sk'
__addon__ = xbmcaddon.Addon(id=__scriptid__)
__language__ = __addon__.getLocalizedString
settings = {'downloads':__addon__.getSetting('downloads'),'quality':__addon__.getSetting('quality')}
params = util.params()
if params=={}:
xbmcutil.init_usage_reporting(__scriptid__)
provider = joj.JojContentProvider()
class XBMCJojContentProvider(xbmcprovider.XBMCMultiResolverContentProvider):
def render_default(self, item):
if item['type'] == 'showoff':
item['title'] = item['title'] + ' [B](Nevys)[/B]'
elif item['type'] == "showon7d":
item['title'] = item['title'] + ' [B][COLOR red](7d)[/COLOR][/B]'
if item['type'] == 'topvideo' or item['type'] == 'newvideo':
self.render_video(item)
else:
self.render_dir(item)
XBMCJojContentProvider(provider,settings,__addon__).run(params)
|
On Halloween Eve in 1938, a flood of terror swept the United States. Some people, believing that the world was coming to an end, tried flight or suicide, or just cringed in their homes as “aliens” from Mars attacked New Jersey, then New York and the world. But it was just a prank, tapping a deep national well of pre-war anxiety, and produced for radio by Orson Welles and his Mercury Players.
Times have changed so radically since then that, in the face of real disasters like the Three Mile Island “partial meltdown” in 1979, the explosion and fire at Chernobyl in 1986, or the 2011 earthquake and Tsunami-sparked disaster in Japan, people are deceptively calm.
Are we really so confident about our ability to cope and recover, or have we given in to an overarching pessimism about the future of the planet and fate of humanity?
According to a survey by the Encyclopedia Britannica, in 1980 nearly half of all US junior high school students believed that World War III would begin by the year 2000. If you consider the last decade, it looks like the youth of that period – in their 40s today – were only off by one year.
Many futurologists, an academic specialty that emerged about 40 years ago, continue to warn that the environment is critically damaged. Yet this sounds positively cautious when compared to the diverse images of social calamity projected through films, books and the news media. There have always been such predictions, but in the last few decades they have proliferated almost as rapidly as nuclear weapons during the Cold War. Some dramatize a "big bang" theory: global devastation caused by some extinction-level event.
Fortunately, a few do chart a slightly hopeful future, one in which humanity either smartens up in time to save itself or manages to survive.
Rather than a desire to be scared out of our wits, the attraction to such stories and predictions may reflect a widespread interest in confronting the likely future. The mass media may, in fact, be producing training guides for the coming Dark Age — if we’re lucky.
Sometimes humanity – or California – is saved in the nick of time by an individual sacrifice or collective action. Sometimes, as in the classics On the Beach, Dr. Strangelove or The Omega Man (remade as I am Legend), we are basically wiped out. Occasionally there are long-term possibilities for survival, but technology breaks down and the environment takes strange revenge. In some cases the future is so dismal that it is hardly worth going on, as in Cormac McCarthy’s The Road.
In a few cases the end of humanity is just a piece of cosmic black humor.
All of these are speculative visions, many adapted from ideas originally developed in pulp science fiction or from prophetic statements by figures like Edgar Cayce. The films usually offer a way out (audiences generally favor hopeful endings), while deep doom and gloom tend to gain more traction in print. But both scenarios share the assumption that the track we are on leads to a dangerous dead end.
We seem to keep asking the same basic questions: How do we get to the apocalypse? And what happens afterward? One obvious way to get pretty close is to misuse technology, especially when the mistakes are made as a result of greed – for power, knowledge or cold cash.
The classic anti-nuclear film The China Syndrome presents a textbook example: greedy corporations ignoring public health and shoddy construction in pursuit of profit. It was a powerful statement in its day, especially given the Three Mile Island accident just weeks after the film’s release, yet predictable in a way and inconclusive on the prospects for health or quality survival in a nuclear-powered world. We are just beginning to have this discussion again.
An earlier “close call” film, The Andromeda Strain, had a more inventive story and placed the blame on a lust for knowledge (the old Frankenstein theme). But this early techno-thriller provided no real solution to the problem of disease or disaster created by scientific discovery. In Michael Crichton’s Andromeda Strain the threat was a deadly organism brought back from outer space, the same kind of self-inflicted biological warfare that heavy doses of radioactive fallout can become. But in the book and film the blood of victims coagulated almost instantly, avoiding the prolonged agony of dying from a plague or the long-term effects of radiation.
Fear of nuclear power is by no means new. Radiation created many movie monsters in the 1950s, from the incredible 50-foot man and woman to giant mantises, crabs and spiders. But the threat was usually related to the testing or detonation of weapons, not the ongoing use of what was then called “the peaceful atom.” That mythical atom was going to be our good friend in a cheap, safe, long-term relationship.
Since then, and especially since the nuclear accidents of the 1970s and 80s, nuclear plants have provided a basis for various bleak scenarios. Not even Vermont has been spared, though it sometimes appears as a post-disaster oasis. In the 1970s novel The Orange R, however, Middlebury College teacher John Clagett extended nuclear terror into a future where the Green Mountains are inhabited by radioactive people called Roberts. They are dying off rapidly in a country where apartheid has become a device to keep the Roberts away from the Normals.
In The Orange R Normal people who live in radioactive areas wear airtight suits and laugh hysterically when anyone mentions solar power. All of Vermont’s major streams and bodies of water have heated up, and the deer have mutated into killer Wolverdeer. Still, the book offers a hopeful vision at the end: the Roberts rise up and take over Vermont’s nukes and successfully dismantle the Nuclear Regulatory Commission, as well as a corporate state that is only vaguely described. Most Vermonters have terminal radiation sickness, but for humanity it turns out to be another close call.
There are simply too many novels about the end of the current civilization, too many to list and perhaps too many for our psychological health. It could become a self-fulfilling prophecy.
Only a few decades ago people who accepted the prophecies of Nostradamus or Edgar Cayce were mocked by mainstream society and even some of their close friends. Cayce predicted that the western part of the US would be broken up, that most of Japan would be covered by water, and that New York would be destroyed in 1998 (perhaps he meant Mayor Giuliani’s remake of Times Square). Nearly 400 years earlier Nostradamus, whose benefactor was Henry II of France, said that western civilization would be under heavy attack from the East in 1999, with possible cataclysmic repercussions. Not far off, it turns out.
But what is “lunatic fringe” in one era can become mainstream, perhaps even commercially viable, in another.
The destruction of the West Coast has been featured in numerous books and movies. Hollywood has of course excelled in creating doomsday myths, from the antichrist’s continuing saga in countless unmemorable installments, to total destruction in the Planet of the Apes franchise, The Day After Tomorrow, 2012 and many more.
“Racked by earthquakes and volcanoes, Japan is slowly sinking into the sea. A race against time and tide begins as Americans and Japanese work together to salvage some fraction of the disappearing Japan.” Close, but they missed the nuclear angle.
Predictions to the contrary, Stanley Kubrick’s Dr. Strangelove remains one of the most memorable doomsday movies. Its black humor and naturalistic performances by Peter Sellers, George C. Scott and Sterling Hayden combine with a devastating premise – that The End may come through a mixture of human error (a demented general) and flawed technology (an extinction level bomb that can’t be disarmed).
There haven’t been many stories based on Nostradamus’ Eastern siege prophecy, although there certainly could be. But a number of films have adapted Cayce’s visions of environmental upheaval. Oddly enough Charlton Heston appears in several, usually as Cassandra or savior. In Planet of the Apes he is an astronaut who returns to Earth only to find his civilization in ruins, apes in charge, and humans living below ground as scarred mutants who worship the bomb. In The Omega Man he is a disillusioned scientist who has survived bio-chemical war and spends his days exterminating book-burning mutants. He discovers an antidote to the plague, but only a handful of people are left to give humanity another chance.
And then there is Soylent Green, a film that presents the slow road to environmental pollution and starvation. This time Heston is a policeman who eventually discovers that the masses have been hoodwinked into cannibalism. They are also so depressed that suicide parlors are big business.
Most of the Heston vehicles were big budget B-movies, exploiting popular anxiety but much less affecting than Dr. Strangelove or Nevil Shute’s On the Beach. On the other hand, they deftly tapped into growing doubts about the future with a Dirty Harry-style response.
Ecologist George Stewart wrote his novel Earth Abides in 1949, before the Atom bomb scare took hold or the environment seemed like something to worry about. But his story of civilization destroyed by an airborne disease took the idea of rebuilding afterward about as far as anyone. In this prescient book the breakdown of man-made systems is traced in convincing detail, in counterpoint with a story of survival without machines, mass production and, ultimately, most of what residents of developed countries take for granted.
Not many recent books or films are as optimistic about our prospects once humanity has gone through either its Big Bang or Long Wheeze end game. In Margaret Atwood’s recent two-volume science fiction saga, for example, man-made environmental catastrophe and mass extinction in Oryx and Crake is followed, in The Year of the Flood, by marginal survival in a strange mutated world.
The optimism of Earth Abides about the ability of human beings to adapt may be a reason why it did not develop the cult following of more dystopian tales. The more dismal the forecast, it seems, the more enthusiastic the following. Apropos, one of the most popular science fiction books downloaded last year was The Passage, Justin Cronin’s compelling mixture of vampires run amuck, government conspiracy, and post-apocalypse survivalism.
What most of these stories and films have in common is a basic idea: the inevitability of radical, cataclysmic change. Should we manage to get beyond annihilation, apocalypse, Armageddon or whatever, they predict that we are very likely to enter a new Dark Age. Like most things, this too isn’t a new idea. At the end of his life J. B. Priestley, the British novelist who founded the Campaign for Nuclear Disarmament, contemplated such a future. Calling it a “slithering down” he forecast that industrial civilization would one day come to an end.
But even in a Dark Age there is some hope. The life of the planet will likely continue and equilibrium can be reestablished in time. At least many of us continue to hope so. If the devastation is not total, perhaps a new culture can emerge. The main question thus becomes not whether the Earth will survive but how human beings fit in.
Near the end of his life H. G. Wells, the master of science fiction who produced optimistic visions in The Shape of Things to Come and The Time Machine, turned pessimist and wrote Mind at the End of Its Tether. “There is no way out or round or through,” he concluded. Life on Earth may not be ending, Wells believed, but humans aren’t going anywhere.
Compared with that forecast, tales of a new Dark Age start to sound more hopeful.
Greg Guma lives in Vermont. His new sci-fi novel, Dons of Time, was released in October.
|
# Copyright 2012 Nebula, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from django.utils.translation import ugettext_lazy as _

import horizon_lib


class SystemPanels(horizon_lib.PanelGroup):
    slug = "admin"
    name = _("System")
    panels = ('overview', 'metering', 'hypervisors', 'aggregates',
              'instances', 'volumes', 'flavors', 'images',
              'networks', 'routers', 'defaults', 'info')


class Admin(horizon_lib.Dashboard):
    name = _("Admin")
    slug = "admin"
    panels = (SystemPanels,)
    default_panel = 'overview'
    permissions = ('openstack.roles.admin',)


horizon_lib.register(Admin)
|
In addition to these famous sites, the Four Bar Cottages may also be available for birding with 24-hour notice. Let us know if you would like to bird there and we will try to make arrangements with Bill and Jill.
200+ bird species have been recorded at the Four Bar Cottages. Some stand-out species are zone-tailed hawk, gray hawk, rose-throated becard, and Bendire's thrasher. Some eastern species that have been recorded here are worm-eating warbler, American redstart, hooded warbler, and red-eyed vireo.
"The Four Bar cattle brand was registered by Benton Strickland of nearby Animas, NM, in pre-statehood days (both AZ and NM became states in 1912). While most brands today are assigned with their registry number in the tens of thousands, the Four Bar is number 971. It is one of the oldest brands in the area. It is still in use today by Bill and Jill Cavaliere, owners of the Four Bar Cottages, and Jill is a granddaughter of Benton Strickland."
|
import io
from unittest.mock import patch

import pytest

import open_cp.scripted.processors as processors
import open_cp.scripted.evaluators as evaluators

from .. import helpers


@pytest.fixture
def outfile():
    with io.StringIO() as f:
        yield f


@pytest.fixture
def hit_rate_save(outfile):
    hrs = processors.HitRateSave(outfile, [10, 15, 20, 100])
    hrs.init()
    return hrs


def test_HitRateSave_header(hit_rate_save, outfile):
    hit_rate_save.done()
    assert outfile.getvalue().strip() == "Predictor,Start time,End time,10%,15%,20%,100%"


def test_HitRateSave_header_filename():
    capture = helpers.StrIOWrapper()
    with patch("builtins.open", helpers.MockOpen(capture)):
        hrs = processors.HitRateSave("out.csv", [10, 20])
        hrs.init()
        hrs.done()
    assert capture.data.strip() == "Predictor,Start time,End time,10%,20%"


def test_HitRateSave(hit_rate_save, outfile):
    hit_rate_save.process("predname", evaluators.HitRateEvaluator(),
                          [{10: 12, 15: 20, 20: 100, 100: 100}], [("absa", "ahjsdjh")])
    hit_rate_save.process("dave", 6, None, None)
    hit_rate_save.done()
    rows = [x.strip() for x in outfile.getvalue().split("\n")]
    assert rows[0] == "Predictor,Start time,End time,10%,15%,20%,100%"
    assert rows[1] == "predname,absa,ahjsdjh,12,20,100,100"


@pytest.fixture
def hit_count_save(outfile):
    hcs = processors.HitCountSave(outfile, [10, 15, 20, 100])
    hcs.init()
    return hcs


def test_HitCountSave_header(hit_count_save, outfile):
    hit_count_save.done()
    assert outfile.getvalue().strip() == "Predictor,Start time,End time,Number events,10%,15%,20%,100%"


def test_HitCountSave(hit_count_save, outfile):
    hit_count_save.process("pn", evaluators.HitCountEvaluator(),
                           [{10: (5, 12), 15: (6, 12), 20: (8, 12), 100: (12, 12)}],
                           [("absa", "ahjsdjh")])
    hit_count_save.process("dave", 6, None, None)
    hit_count_save.done()
    rows = [x.strip() for x in outfile.getvalue().split("\n")]
    assert rows[1] == "pn,absa,ahjsdjh,12,5,6,8,12"
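The tests above exercise a simple three-step lifecycle: `init()` writes a CSV header row naming one column per coverage percentage, each `process()` call appends one row of hit rates for a prediction run, and `done()` finalizes the output. A minimal, self-contained sketch of that lifecycle is below; this `HitRateWriter` is an illustration of the pattern under test, not the actual `open_cp.scripted.processors` implementation.

```python
import io


class HitRateWriter:
    """CSV writer with the init/process/done lifecycle the tests exercise."""

    def __init__(self, outfile, coverages):
        self._out = outfile
        self._coverages = coverages

    def init(self):
        # Header: fixed columns, then one "<coverage>%" column per level.
        cols = ["Predictor", "Start time", "End time"]
        cols += ["{}%".format(c) for c in self._coverages]
        self._out.write(",".join(cols) + "\n")

    def process(self, name, rates, time_range):
        # One row per prediction run: name, time range, then hit rates
        # in the same order as the header columns.
        start, end = time_range
        row = [name, start, end] + [str(rates[c]) for c in self._coverages]
        self._out.write(",".join(row) + "\n")

    def done(self):
        pass  # a file-backed version would flush and close the handle here


out = io.StringIO()
writer = HitRateWriter(out, [10, 20])
writer.init()
writer.process("predname", {10: 12, 20: 100}, ("absa", "ahjsdjh"))
writer.done()
print(out.getvalue())
```

Writing to a file-like object rather than a path is what lets the tests above substitute an `io.StringIO` (or a patched `open`) and assert on the exact rows produced.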
|