from ztag.annotation import *
import re
class Helix(Annotation):
    protocol = protocols.HTTP
    subprotocol = protocols.HTTP.GET
    port = None

    version_re = re.compile(
        r"^Helix Mobile Server/(\d+(\.\d+)*)",
        re.IGNORECASE
    )
    # Capture the parenthesized platform token, e.g. "(rhel5)" or "(win32)".
    os_re = re.compile(
        r"^Helix Mobile Server/(?:\d+(?:\.\d+)*) \((.*)\)",
        re.IGNORECASE
    )

    def process(self, obj, meta):
        server = obj["headers"]["server"]
        if server.startswith("Helix Mobile Server"):
            meta.local_metadata.product = "Helix Mobile Server"
            version_match = self.version_re.search(server)
            if version_match:
                meta.local_metadata.version = version_match.group(1)
            os_match = self.os_re.search(server)
            if os_match is None:
                return
            # os holds the parenthesized platform token; lowercase it so
            # the substring checks below are case-insensitive.
            os = os_match.group(1).lower()
            if "win" in os:
                meta.global_metadata.os = OperatingSystem.WINDOWS
            elif "rhel4" in os:
                meta.global_metadata.os = OperatingSystem.REDHAT
                meta.global_metadata.os_version = "4"
            elif "rhel5" in os:
                meta.global_metadata.os = OperatingSystem.REDHAT
                meta.global_metadata.os_version = "5"
            elif "rhel6" in os:
                meta.global_metadata.os = OperatingSystem.REDHAT
                meta.global_metadata.os_version = "6"
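As a quick sanity check, the two patterns can be exercised against a representative Server header; the version number and platform token below are invented for illustration:

```python
import re

# Hypothetical Server header of the shape the annotation inspects.
server = "Helix Mobile Server/14.2.0.212 (rhel5)"

version_re = re.compile(r"^Helix Mobile Server/(\d+(\.\d+)*)", re.IGNORECASE)
os_re = re.compile(r"^Helix Mobile Server/(?:\d+(?:\.\d+)*) \((.*)\)",
                   re.IGNORECASE)

version = version_re.search(server).group(1)    # the dotted version string
platform = os_re.search(server).group(1)        # the parenthesized platform
```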
|
As part of the World AIDS Day 2017 national advocacy campaign, the UNAIDS Joint Team and the Tabah Foundation, in collaboration with the National AIDS Program, organized a one-day high-level event at the Kempinski Nile Hotel. The event was held to highlight religious leaders’ views on the stigma and discrimination faced by people living with HIV.
The event was attended by representatives from the Ministry of Health and Population, UN Joint Team, Al Habeeb Ali Al Jifri and Bishop Bolous Sorour in addition to representatives of Tabah Foundation and selected media personnel.
The event included participation of key religious leaders and activists in the field of HIV and AIDS.
Globally, the first of December is marked as World AIDS Day. The commemoration forms part of a national advocacy campaign designed to raise awareness about HIV, strengthen prevention efforts and mobilize the various sectors to address the stigma and discrimination surrounding people living with HIV in Egypt.
|
#!/usr/bin/env python
# coding: utf-8
# Copyright 2019 The Crashpad Authors. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import collections
import os
import subprocess
import sys
MigInterface = collections.namedtuple('MigInterface', ['user_c', 'server_c',
'user_h', 'server_h'])
def generate_interface(defs, interface, includes=(), sdk=None, clang_path=None,
                       mig_path=None, migcom_path=None):
    # An immutable default for "includes" avoids sharing a mutable default
    # list across calls.
    if mig_path is None:
        mig_path = 'mig'
command = [mig_path,
'-user', interface.user_c,
'-server', interface.server_c,
'-header', interface.user_h,
'-sheader', interface.server_h,
]
if clang_path is not None:
os.environ['MIGCC'] = clang_path
if migcom_path is not None:
os.environ['MIGCOM'] = migcom_path
if sdk is not None:
command.extend(['-isysroot', sdk])
    for include in includes:
        command.append('-I' + include)
command.append(defs)
subprocess.check_call(command)
def parse_args(args):
parser = argparse.ArgumentParser()
parser.add_argument('--clang-path', help='Path to Clang')
parser.add_argument('--mig-path', help='Path to mig')
parser.add_argument('--migcom-path', help='Path to migcom')
parser.add_argument('--sdk', help='Path to SDK')
parser.add_argument('--include',
default=[],
action='append',
help='Additional include directory')
parser.add_argument('defs')
parser.add_argument('user_c')
parser.add_argument('server_c')
parser.add_argument('user_h')
parser.add_argument('server_h')
return parser.parse_args(args)
def main(args):
parsed = parse_args(args)
interface = MigInterface(parsed.user_c, parsed.server_c,
parsed.user_h, parsed.server_h)
generate_interface(parsed.defs, interface, parsed.include,
parsed.sdk, parsed.clang_path, parsed.mig_path,
parsed.migcom_path)
if __name__ == '__main__':
sys.exit(main(sys.argv[1:]))
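The flag handling above can be checked without a working `mig` toolchain by rebuilding the same parser shape; all paths below are placeholders:

```python
import argparse

# Same parser layout as parse_args() above.
parser = argparse.ArgumentParser()
parser.add_argument('--sdk')
parser.add_argument('--include', default=[], action='append')
parser.add_argument('defs')
parser.add_argument('user_c')
parser.add_argument('server_c')
parser.add_argument('user_h')
parser.add_argument('server_h')

# Hypothetical invocation: repeated --include flags accumulate into a list
# because of action='append'.
parsed = parser.parse_args([
    '--sdk', '/fake/sdk/path',
    '--include', 'dir1', '--include', 'dir2',
    'child_port.defs', 'u.c', 's.c', 'u.h', 's.h',
])
```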
|
JAC has long-standing editorial relationships with a network of top-tier print, broadcast and digital media outlets. We continually craft unique, intriguing and impactful brand stories that journalists want to pursue and readers find engaging. The result: high-profile media campaigns that raise awareness of your brand in your key markets.
At the heart of all we do is your brand identity. If your current identity isn’t accurately reflecting who you are and where you want to be in consumers’ minds, we work with you to redefine the visual and verbal elements that comprise it.
Because celebrities can provide such a powerful halo effect for your brand, JAC has developed close connections with boldface names in entertainment, fashion, design, media and business. We can tap this network to add a little star power to your events and marketing programs.
When it comes to making a big statement in the marketplace, oftentimes two brands are better than one. JAC connects you with leading charitable, arts, events and media brands to create stand-out strategic alliances and event sponsorships that enhance your brand’s awareness and image.
Friend, follow, link, comment, upload—from Facebook to Pinterest, JAC is adept in all the ways your brand can connect with your consumers through social media. We approach building your digital presence strategically with the goal of achieving strong user engagement. We also provide continual analysis of consumer interactions to optimize results.
JAC creates brand events that resonate with attendees and generate extensive media coverage. We leverage our connections with high-profile venues, strategic partners and VIPs to host a range of red-letter engagements including film premieres, fashion shows, pop-up shops, trade show displays and more.
|
import pandas as pd
import numpy as np
def get_targets(p, year):
# years 2010 and 2011 and 2012 don't have ISIN, boooo
pcols = p.columns.values
ttypes_col = deets[year]["summary"]["ttypes"]
targets = p[p["Organisation"].notnull()][["Organisation",pcols[ttypes_col]]]
targets.rename(columns = {pcols[ttypes_col]: "target type"}, inplace=True)
    targets["year"] = year - 1
    # Substring match ("solute"/"ntensity") catches both capitalisations,
    # e.g. "Absolute target" and "absolute and intensity targets".
    targets["has absolute"] = targets["target type"].apply(
        lambda x: "solute" in unicode(x).encode('utf-8'))
    targets["has intensity"] = targets["target type"].apply(
        lambda x: "ntensity" in unicode(x).encode('utf-8'))
return targets
def get_vcounts(p, year):
    pcols = p.columns.values
    # Look up the goal column for the requested year, not hard-coded 2014.
    return p[pcols[goalcols[year]]].value_counts()
def summary(vcounts, p):
# stats about emissions targets in 2014
# generate for every year
hasintensity = vcounts['Intensity target'] + 350
hasabs = vcounts['Absolute target'] + 350
neg = len(p) - vcounts.values.sum() + vcounts['No']
return {"total":len(p),
"neg":neg,
"intensity":hasintensity,
"absolute":hasabs}
def get_companies_by_target(p):
# get by levels[0] should be ISIN
companies = p.index.levels[0].tolist()
pieces_targets = []
pieces_none = []
for c in companies:
try:
f = p.loc[c]
fhas_target = f[f["has target"]]
f["ISIN"] = c
yearswithtarget = fhas_target.index.tolist()
if len(yearswithtarget) > 2:
pieces_targets.append(f)
else:
pieces_none.append(f)
        except Exception:
            # Company missing or malformed in the frame; note it and move on.
            print c
ptargets = pd.concat(pieces_targets).reset_index().set_index(["ISIN", "year"])
pnotargets = pd.concat(pieces_none).reset_index().set_index(["ISIN", "year"])
return ptargets, pnotargets
def get_hadtarget(targetorgs):
# shift had target
# targetorgs index is ["Organisation", "year"]
to_gs = targetorgs.groupby(level=0)
companies = to_gs.indices.keys()
pieces = []
for c in companies:
g = to_gs.get_group(c)
g_tseries = np.array(g["has target"].tolist())
g_aseries = np.array(g["has absolute"].tolist())
g_iseries = np.array(g["has intensity"].tolist())
g_tseries = g_tseries[:-1]
g_aseries = g_aseries[:-1]
g_iseries = g_iseries[:-1]
g = g[1:]
g['had target last year'] = g_tseries
g['had absolute last year'] = g_aseries
g['had intensity last year'] = g_iseries
# g["ISIN"] = c
pieces.append(g)
new_to = pd.concat(pieces).reset_index().set_index("Organisation")
return new_to
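The hand-rolled lagging in `get_hadtarget` matches what pandas' built-in `groupby(...).shift()` does per company. A minimal sketch on toy data (company names and years invented, modern pandas assumed):

```python
import pandas as pd

# Toy frame indexed by (Organisation, year), mirroring targetorgs.
df = pd.DataFrame(
    {"has target": [True, False, True, True]},
    index=pd.MultiIndex.from_tuples(
        [("AcmeCo", 2010), ("AcmeCo", 2011),
         ("AcmeCo", 2012), ("BetaCo", 2010)],
        names=["Organisation", "year"],
    ),
)

# shift(1) within each company lags the flag by one year, so every row
# sees the previous year's value; each company's first year becomes NaN.
df["had target last year"] = df.groupby(level=0)["has target"].shift(1)
```

This avoids the explicit per-company loop and the off-by-one slicing of the arrays.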
# targetorgs is the join of the table of targets,
# Scope 1 and 2 emissions, and orginfos
def get_targetorgs(to):
to = to.reset_index().set_index("ISIN")
to = to[['year','Country', 'GICS Sector',
'GICS Industry Group', 'GICS Industry',
'cogs', 'revt',
'has target', 'has absolute',
'has intensity',
'Scope 1', 'Scope 2',
'1and2 total', '1and2 intensity',
'percent change 1and2 intensity',
'percent change 1and2 total']]
return to
goalcols = { 2014: 14, 2013: 14, 2012: 12, 2011: 12, 2010: 12 }
# target details
deets = { 2014: { 'summary': { 'sheet': 12, 'ttypes': 14 },
'abs info':
{ 'sheet': 13, 'scope': 15, 'target': 17,
'base year': 18, 'base ghg': 19,
'target year': 20},
'int info':
{ 'sheet': 14, 'scope': 15, 'target': 17,
'metric': 18,
'base year': 19, 'base ghg int': 20,
'target year': 21},
'progress':
{ 'sheet': 16, 'target id': 14},
'initiatives':
{ 'sheet': 18, 'itype': 14,
'monetary savings': 17, 'monetary cost': 18 }
},
2013: { 'summary': { 'sheet': 10, 'ttypes': 14 },
'abs info':
{ 'sheet': 11, 'scope': 15, 'target': 17,
'base year': 18, 'base ghg': 19,
'target year': 20 },
'int info':
{ 'sheet': 12, 'scope': 15, 'target': 17,
'metric': 18,
'base year': 19, 'base ghg int': 20,
'target year': 21},
'progress': { 'sheet': 14 },
'initiatives': { 'sheet': 16, 'itype': 14 }
},
2012: { 'summary': { 'sheet': 10, 'ttypes': 12 },
'abs info':
{ 'sheet': 11, 'scope': 13, 'target': 15,
'base year': 16, 'base ghg': 17,
'target year': 18 },
'int info':
{ 'sheet': 12, 'scope': 13, 'target': 15,
'metric': 16, 'base year': 17, 'base ghg int': 18,
'target year': 19 },
'progress': { 'sheet': 14 },
'initiatives': { 'sheet': 16, 'itype': 12 }
},
2011: { 'summary': { 'sheet': 9, 'ttypes': 12 },
'abs info': {},
'int info': {},
'progress': {},
'initiatives': {}
},
2010: { 'summary': { 'sheet': 23, 'ttypes': 12 },
'abs info': {},
'int info': {},
'progress': {},
'initiatives': {}
}
}
# scopes need cleaning...
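The nested `deets` table is a year-keyed lookup of sheet and column positions; retrieving the target-types column for a given year looks like this (values copied from the 2014 and 2012 entries above):

```python
# Year-keyed lookup: deets[year]["summary"]["ttypes"] gives the positional
# index of the "target types" column in that year's summary sheet.
deets = {2014: {'summary': {'sheet': 12, 'ttypes': 14}},
         2012: {'summary': {'sheet': 10, 'ttypes': 12}}}

ttypes_col_2014 = deets[2014]['summary']['ttypes']
ttypes_col_2012 = deets[2012]['summary']['ttypes']
```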
|
Signage can be mundane and sometimes that’s OK. Clever signage design though can deliver so much more to enhance usability and enjoyment of public spaces, education grounds and sports facilities. 2MH Consulting works with talented designers and innovative signage production facilities to provide signage options you may never have considered. We consistently receive enthusiastic feedback from clients about the quality and impact of our signage solutions.
Users of your outdoor space or sporting facility need to be able to find what they need easily. That involves the right signage in the right place. It requires intuitive directional signage that users understand at a glance. And at a practical level it requires signage materials that will continue to be easy to find and read for years to come, not just look pretty for the next few months then fade or peel. We deliver creative signage solutions that communicate and that last.
What sports facility is complete without a scoreboard? Whatever the sporting code, indoor or outdoor, electronic or manual, 2MH Consulting has seen every scoreboard request you can think of. We know how to design for usability, longevity, player visibility and spectator amenity.
Interpretative signage can lift a public space from a mere space into an experience. It provides a broader context for users to explore and appreciate, it engages them, and it increases the impact of a project without adding significantly to the cost. 2MH Consulting has received accolades for interpretative signage on projects highlighting the history of a site, botanical features of the landscaping, animals native to the area, the Aboriginal heritage of the region, and big-picture directional signage showing distances to surrounding cities near and far. What’s needed is creativity, foresight and upfront planning to deliver interpretative signage as an integral part of the overall project, not an afterthought. We can guide you through this process and we know how to deliver stunning results.
|
from django.views.generic import ListView, DetailView, CreateView, \
DeleteView, UpdateView, \
ArchiveIndexView, DateDetailView, \
DayArchiveView, MonthArchiveView, \
TodayArchiveView, WeekArchiveView, \
YearArchiveView
from teacher.models import Teacher_loan
class Teacher_loanView(object):
model = Teacher_loan
def get_template_names(self):
"""Nest templates within teacher_loan directory."""
tpl = super(Teacher_loanView, self).get_template_names()[0]
app = self.model._meta.app_label
mdl = 'teacher_loan'
        # Equivalent to tpl.replace(app, '{0}/{1}'.format(app, mdl), 1):
        # tpl[:8] skips the "teacher/" app prefix before splicing in the
        # model subdirectory.
        self.template_name = tpl[:8] + 'teacher_loan/' + tpl[8:]
return [self.template_name]
class Teacher_loanDateView(Teacher_loanView):
date_field = 'timestamp'
month_format = '%m'
    def get_success_url(self):
        from django.core.urlresolvers import reverse
        return reverse('teacher_teacher_loan_list')
class Teacher_loanBaseListView(Teacher_loanView):
paginate_by = 10
    def get_success_url(self):
        from django.core.urlresolvers import reverse
        return reverse('teacher_teacher_loan_list')
class Teacher_loanArchiveIndexView(
Teacher_loanDateView, Teacher_loanBaseListView, ArchiveIndexView):
    def get_success_url(self):
        from django.core.urlresolvers import reverse
        return reverse('teacher_teacher_loan_list')
class Teacher_loanCreateView(Teacher_loanView, CreateView):
    def get_success_url(self):
        from django.core.urlresolvers import reverse
        return reverse('teacher_teacher_loan_list')
class Teacher_loanDateDetailView(Teacher_loanDateView, DateDetailView):
    def get_success_url(self):
        from django.core.urlresolvers import reverse
        return reverse('teacher_teacher_loan_list')
class Teacher_loanDayArchiveView(
Teacher_loanDateView, Teacher_loanBaseListView, DayArchiveView):
    def get_success_url(self):
        from django.core.urlresolvers import reverse
        return reverse('teacher_teacher_loan_list')
class Teacher_loanDeleteView(Teacher_loanView, DeleteView):
    def get_success_url(self):
        from django.core.urlresolvers import reverse
        return reverse('teacher_teacher_loan_list')
class Teacher_loanDetailView(Teacher_loanView, DetailView):
    def get_success_url(self):
        from django.core.urlresolvers import reverse
        return reverse('teacher_teacher_loan_list')
class Teacher_loanListView(Teacher_loanBaseListView, ListView):
    def get_success_url(self):
        from django.core.urlresolvers import reverse
        return reverse('teacher_teacher_loan_list')
class Teacher_loanMonthArchiveView(
Teacher_loanDateView, Teacher_loanBaseListView, MonthArchiveView):
    def get_success_url(self):
        from django.core.urlresolvers import reverse
        return reverse('teacher_teacher_loan_list')
class Teacher_loanTodayArchiveView(
Teacher_loanDateView, Teacher_loanBaseListView, TodayArchiveView):
    def get_success_url(self):
        from django.core.urlresolvers import reverse
        return reverse('teacher_teacher_loan_list')
class Teacher_loanUpdateView(Teacher_loanView, UpdateView):
    def get_success_url(self):
        from django.core.urlresolvers import reverse
        return reverse('teacher_teacher_loan_list')
class Teacher_loanWeekArchiveView(
Teacher_loanDateView, Teacher_loanBaseListView, WeekArchiveView):
    def get_success_url(self):
        from django.core.urlresolvers import reverse
        return reverse('teacher_teacher_loan_list')
class Teacher_loanYearArchiveView(
Teacher_loanDateView, Teacher_loanBaseListView, YearArchiveView):
make_object_list = True
    def get_success_url(self):
        from django.core.urlresolvers import reverse
        return reverse('teacher_teacher_loan_list')
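The template-name rewrite in `get_template_names` is a plain string splice. Assuming the default template name is `teacher/teacher_loan_list.html` (a hypothetical value for illustration), it behaves like this:

```python
tpl = 'teacher/teacher_loan_list.html'  # hypothetical default template name

# tpl[:8] is the "teacher/" app prefix (8 characters), so the model
# subdirectory is spliced in between the app directory and the file name.
nested = tpl[:8] + 'teacher_loan/' + tpl[8:]
```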
|
The Uniden DECT4086 is a two-line cordless phone with a digital answering system. It uses DECT technology operating at 1.9 GHz for maximum voice security and the clearest voice reception, and the answering system offers handset access and 30 minutes of recording time.
The Uniden DECT4086 has a large, easy-to-read LCD screen that clearly displays the date, time and extension, and is also compatible with caller-ID and call-waiting display. Other features include 100 phonebook locations, a 50-number caller-ID history, three-party conferencing, a message-waiting indicator and a base duplex speakerphone. The system is expandable to up to 10 handsets.
|
from __future__ import division
from numpy import dot, inf
from numpy.linalg import norm
# our own functions
from newton import newton


def trsq(Lg, delta, xc, f, g):
    # Quadratic model to minimise
    # n.b. x -> x - xc for convenience
    fun = lambda x: f + dot(x, g) + (Lg / 2) * norm(x) ** 2

    # Case a) Trust region inactive:
    # unconstrained minimiser of the quadratic model
    xq1 = -g / Lg
    # Check whether the unconstrained minimiser lies inside the trust region
    if norm(xq1) < delta:
        bndq1 = fun(xq1)
        xbq1 = xq1
    else:  # No interior solution
        bndq1 = inf
        xbq1 = inf

    # Case b) Trust region active
    # Initial perturbation of the multiplier
    l = -Lg + 1e-5

    # nfq(l) supplies one Newton step for the boundary equation |x(l)| = delta
    def nfq(l):
        # x(l)
        xl = -g / (l + Lg)
        # |x(l)| - delta (Newton stopping rule)
        xlmd = norm(xl) - delta
        # f(l) for p = -1
        fl = 1 / norm(xl) - 1 / delta
        # x'(l)
        xlp = g / ((l + Lg) ** 2)
        # f'(l) for p = -1
        flp = -dot(xl, xlp) * (dot(xl, xl) ** (-1.5))
        # Newton increment
        dl = fl / flp
        return xlmd, dl, delta

    # Run Newton's method on the boundary equation
    l = newton(nfq, l)
    # Given l, recover the boundary point
    xq2 = -g / (l + Lg)
    bndq2 = fun(xq2)
    xbq2 = xq2

    # Return the better of the two candidates, shifted back by xc
    if bndq1 < bndq2:
        bnd = bndq1
        xb = xbq1 + xc  # since x -> x - xc
    else:
        bnd = bndq2
        xb = xbq2 + xc  # since x -> x - xc
    return bnd, xb
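Case a) relies on the fact that the unconstrained minimiser of the quadratic model f + g·x + (Lg/2)|x|² is x = -g/Lg, where its gradient g + Lg·x vanishes. A small numeric check with toy values for f, g and Lg:

```python
import numpy as np

# Toy model data (invented): m(x) = f + g.x + (Lg/2)|x|^2
f, g, Lg = 1.0, np.array([2.0, -4.0]), 2.0
fun = lambda x: f + np.dot(x, g) + (Lg / 2) * np.linalg.norm(x) ** 2

# Unconstrained minimiser, as in case a) above
xq1 = -g / Lg

# The gradient g + Lg*x vanishes at the minimiser
grad = g + Lg * xq1
```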
|
be21zh.org, bringing China abreast of the 21st century (二一新纪): leading with respect.
hurt by Chinese air pollution. ^ Since dining out with my son last Sunday, my throat has been hurt by the well-known smoky Chinese air that covers the country, and now I sneeze, too. At dawn I dreamed of living on campus or in a QRRS dorm, my bed in a corridor. My one-time QRRS colleague WangChangqing, also the best man at my first civil wedding, lived there too. Then I dreamed that at home I tried too many times to draw hot water from the heating pipe for some use and broke the plastic tube inside. It was urgent, otherwise the hot water would run across the house, so I hurried to ask my third sister for help. Now it is a bright morning; I signed on to the QRRS check-in system and ate breakfast in the dorm canteen. My son, warrenzh 朱楚甲, was glad to chat with me online last night, likely because I helped him find a long-missing video game. God, bring my girl LV, Asoh Yukiko, girl Zhou, and my Taiwan girl to my side sooner, and bring out our prosperous offspring in time. Thanks, God dad!
dreamed of hometown, Wuxue. ^ At dawn I dreamed first of Zhudajiu, my late dad's village, where the disgusting toilet harassed me again. When I reluctantly entered it, a middle-aged doctor and two young ladies hovered around me, chatting with me continuously; all of them were villagers there. Then I dreamed of the town, Wuxue's bus station, where the lazy, loafing ticket-sale women peeked into my purse, and I had to dig into my purse frequently, anxious about thieves there. It is likely the first working day on which QRRS, my one-time and long-time employer, a state-owned company, adopted a card check-in system. I set an alarm in the night and it woke me this morning. So far I have succeeded in signing on amid the morning crowd and have returned to the dorm to open a new day in front of my notebook. God, dad, please show me my Royal China sooner; bring my other children, in heaven now, sooner. Please let my son, warrenzh 朱楚甲, enjoy the life and cosiness of the Internet as I do. That's my prayer on this smoky morning after a shallow snow.
dreamed of working together with my son. ^ At dawn I dreamed I worked with my son on a legacy system. I tried hard and completed the missing function of the old application, so my son and I were enrolled by the company. Then a previous workmate called to inform me that QRRS, my long-time employer, had adopted a card check-in system, hoping I could sign on twice a day with the employee card. I was idle, so I visited the old office and filed with the director for a desktop. Now I am surfing on my notebook via the corporate LAN. God, I look forward to bliss in the sunny day; grant me a more open workspace step by step. God, dad, thanks for the recent good time with my son and with my workload. Bring me my Royal China sooner, to allow our glamorous task on the planet, in a corner of the world by the Chinese mainland.
dreamed of my Japanese girls. ^ The dorm is warm. At dawn I dreamed of two Japanese girls in my life: one the actress Jutani Nami, from a Taiwan series, "爱无限", that deeply touched me; the other Asoh Yukiko, my Crowned Queen. I managed harmony between them when we dwelt together. Our parents also appeared before our wedding ceremony. Asoh was more reserved, and in the dream I tried harder to appeal to her. We also attended birth school. We had a good time in love. God, the time of life is passing; where is our family life with my Royal China? Yesterday was the first day of the winter vacation of my son, warrenzh 朱楚甲. He is more or less anxious about his school performance, which so far has been less than impressive. God, all bliss is over his living on the earth.
dreamed of being a boiler man. ^ At dawn I dreamed of the life of an elderly boiler man. He tried to befriend two girls on campus, one Japanese and one Chinese, first by paying the Chinese girl to help wash his clothes. Then he found his wife and child had been missing for ages. Then he closely witnessed the Japanese girl's life: her mother, her classmates, and so on. Later I visited her school with red wine and shared it with her; likely I fell in love with her.
These days I have been busy with the new sites of my son, warrenzh 朱楚甲, at www.woz.fm : I designed a logo for them and updated the family sites with a new sidebar and footer to include links to the new member sites. Chinese censorship delayed my operation heavily, but thank God, it is done. My son and I also made proud progress in our video game. God, you see the prize of my joy at the root of the planet. Bring me my Royal China sooner, God dad!
dreamed of my company, Dragon Horse. ^ I dreamed I first worked for a company and pivoted a project with my smartness, even if not the brightest. Then, out of frustration with the company I worked for, I built a company of my own. Its name is 骥, or Dragon Horse, and its first product was a rebuild of the project I had previously contributed to. Then the old, larger company competed with us and tried to occupy our land, crushing our borderland with machines and cultivating it. Then I dreamed my company worked on high technology I now can't recall, though it was vivid and lengthy in the dawn dream; I only remember that I worked hard within my company and enjoyed it. Yesterday I visited my son in the afternoon. We played video games and I taught him about teamwork when he was too hasty to edge me out in the shooting game. When I returned to the QRRS dorms I was penniless except for some change for the bus. I tried to get a meal on credit at a nearby restaurant I frequented, which had loaned me meals several times, but this time the girl cashier flatly refused. However, I managed to eat a dinner on credit from another small restaurant. God, today I will likely have to live on only one meal, or even worse, for my son's mom said, when she cursed my visit, that her house would be empty today, in order to evade me. God, bring me my Royal China sooner, to home me and my sons. God, thanks, Dad. In this draining Chinese holiday season in the PRC, the one sinking is not me but the floating, badly wrecked nation. God, save me from drifting in the chill that drives the scattered Chinese toward salvation or the seizure of death. God, it is surely a sunny morning outside. God, bless my 2013 and its first day today.
|
# opennms.py vi:ts=4:sw=4:expandtab:
#
# Support functions for plugins that deal with OpenNMS.
# Author: Landon Fuller <landonf@threerings.net>
#
# Copyright (c) 2007 Three Rings Design, Inc.
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
# 1. Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# 2. Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# 3. Neither the name of the copyright owner nor the names of contributors
# may be used to endorse or promote products derived from this software
# without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
import os, tempfile, logging, stat
import splat
from splat import plugin
from pysqlite2 import dbapi2 as sqlite
try:
# Python 2.5 cElementTree
from xml.etree import cElementTree as ElementTree
except ImportError:
# Stand-alone pre-2.5 cElementTree
import cElementTree as ElementTree
# Logger
logger = logging.getLogger(splat.LOG_NAME)
# Output File Encoding
XML_ENCODING = "UTF-8"
# XML Namespaces
XML_USERS_NAMESPACE = "http://xmlns.opennms.org/xsd/users"
XML_GROUPS_NAMESPACE = "http://xmlns.opennms.org/xsd/groups"
# OpenNMS User Record Fields
OU_USERNAME = 'userName'
OU_FULLNAME = 'fullName'
OU_COMMENTS = 'comments'
OU_EMAIL = 'email'
OU_PAGER_EMAIL = 'pagerEmail'
OU_XMPP_ADDRESS = 'xmppAddress'
OU_NUMERIC_PAGER = 'numericPager'
OU_NUMERIC_PAGER_SERVICE = 'numericPagerService'
OU_TEXT_PAGER = 'textPager'
OU_TEXT_PAGER_SERVICE = 'textPagerService'
OU_LDAP_DN = 'ldapDN'
# OpenNMS Group Record Fields
OG_GROUPNAME = 'groupName'
class UserExistsException (plugin.SplatPluginError):
pass
class NoSuchUserException (plugin.SplatPluginError):
pass
class GroupExistsException (plugin.SplatPluginError):
pass
class NoSuchGroupException (plugin.SplatPluginError):
pass
def _sqlite_dict_factory(cursor, row):
"""
Returns sqlite rows as dictionaries
"""
d = {}
for idx, col in enumerate(cursor.description):
d[col[0]] = row[idx]
return d
class WriterContext(object):
def __init__(self):
# A map of (XML/database) fields to LDAP attributes
# The name choices are no accident -- they're meant
# to match between the DB and the XML.
self.attrmap = {
OU_USERNAME : None,
OU_FULLNAME : None,
OU_COMMENTS : None,
OU_EMAIL : None,
OU_PAGER_EMAIL : None,
OU_XMPP_ADDRESS : None,
OU_NUMERIC_PAGER : None,
OU_NUMERIC_PAGER_SERVICE : None,
OU_TEXT_PAGER : None,
OU_TEXT_PAGER_SERVICE : None
}
# Map of configuration keys to the attribute map
self.config_attrmap = {}
for key in self.attrmap.iterkeys():
self.config_attrmap[key.lower() + "attribute"] = key
self.usersFile = None
self.groupsFile = None
self.opennmsGroup = None
class Writer(plugin.Helper):
@classmethod
def attributes(self):
# We want all attributes
return None
@classmethod
def parseOptions(self, options):
context = WriterContext()
for key in options.iterkeys():
# Do some magic to check for 'attribute keys' without enumerating
# them all over again.
if (key.endswith("attribute")):
try:
attrKey = context.config_attrmap[key]
except KeyError:
raise plugin.SplatPluginError, "Invalid option '%s' specified." % key
if (context.attrmap.has_key(attrKey)):
context.attrmap[attrKey] = options[key]
continue
if (key == "usersfile"):
context.usersFile = options[key]
continue
if (key == "groupsfile"):
context.groupsFile = options[key]
continue
if (key == "opennmsgroup"):
context.opennmsGroup = options[key]
continue
raise plugin.SplatPluginError, "Invalid option '%s' specified." % key
        if context.attrmap[OU_USERNAME] is None:
            raise plugin.SplatPluginError, "Missing userNameAttribute option."
        if context.usersFile is None:
            raise plugin.SplatPluginError, "Missing usersFile option."
        if context.groupsFile is None:
            raise plugin.SplatPluginError, "Missing groupsFile option."
        return context
def __init__ (self):
# If a fatal error occurs, set this to True, and we won't attempt to
# overwrite any files in finish()
self.fatalError = False
# Path to the user/group 'database' xml files
self.usersFile = None
self.groupsFile = None
# Create a temporary database in which to store user records
dbfile = None
try:
(handle, dbfile) = tempfile.mkstemp()
self._initdb(dbfile)
except Exception, e:
if (dbfile != None and os.path.exists(dbfile)):
os.unlink(dbfile)
raise plugin.SplatPluginError("Initialization failure: %s" % e)
def _initdb (self, dbfile):
"""
Create our temporary user record database
"""
# Connect to the database
self.db = sqlite.connect(dbfile)
# Initialize the users table
self.db.execute(
"""
CREATE TABLE Users (
userName TEXT NOT NULL PRIMARY KEY,
ldapDN TEXT NOT NULL,
fullName TEXT DEFAULT NULL,
comments TEXT DEFAULT NULL,
email TEXT DEFAULT NULL,
pagerEmail TEXT DEFAULT NULL,
xmppAddress TEXT DEFAULT NULL,
numericPager TEXT DEFAULT NULL,
numericPagerService TEXT DEFAULT NULL,
textPager TEXT DEFAULT NULL,
textPagerService TEXT DEFAULT NULL
);
"""
)
# Now for the group table
self.db.execute(
"""
CREATE TABLE Groups (
groupName TEXT NOT NULL PRIMARY KEY,
comments TEXT DEFAULT NULL
);
"""
)
# ... finally, the group member table
        self.db.execute(
            """
                CREATE TABLE GroupMembers (
                    groupName TEXT NOT NULL,
                    userName TEXT NOT NULL,
                    PRIMARY KEY(groupName, userName),
                    FOREIGN KEY(groupName) REFERENCES Groups(groupName),
                    FOREIGN KEY(userName) REFERENCES Users(userName)
                );
            """
        )
# Drop the file out from under ourselves
os.unlink(dbfile)
# Commit our changes
self.db.commit()
def _insertDict(self, table, dataDict):
"""
Safely insert a dict into a table (with SQL escaping)
"""
def dictValuePad(key):
return '?'
cols = []
vals = []
for key in dataDict.iterkeys():
cols.append(key)
vals.append(dataDict[key])
sql = 'INSERT INTO ' + table
sql += ' ('
sql += ', '.join(cols)
sql += ') VALUES ('
sql += ', '.join(map(dictValuePad, vals))
sql += ');'
self.db.execute(sql, vals)
def _createUserAttributeDict (self, ldapEntry, attrMap):
"""
Add to dict from attribute dictionary
"""
result = {}
# Add required elements
result[OU_USERNAME] = ldapEntry.attributes[attrMap[OU_USERNAME]][0]
result[OU_LDAP_DN] = ldapEntry.dn
# Add optional elements
for key in attrMap.iterkeys():
ldapKey = attrMap[key]
if (ldapEntry.attributes.has_key(ldapKey)):
result[key] = ldapEntry.attributes[ldapKey][0]
return result
def _insertUserRecord (self, context, ldapEntry):
# Validate the available attributes
attributes = ldapEntry.attributes
if (not attributes.has_key(context.attrmap[OU_USERNAME])):
raise plugin.SplatPluginError, "Required attribute %s not found for dn %s." % (context.attrmap[OU_USERNAME], ldapEntry.dn)
# Insert the user record in the database
insertData = self._createUserAttributeDict(ldapEntry, context.attrmap)
try:
self._insertDict("Users", insertData)
self.db.commit()
except Exception, e:
self.fatalError = True
raise plugin.SplatPluginError, "Failed to commit user record to database for dn %s: %s" % (ldapEntry.dn, e)
def _insertGroupRecord (self, context, ldapEntry):
insertData = {
'groupName' : context.opennmsGroup
}
# Attempt to insert the group record
try:
self._insertDict("Groups", insertData)
self.db.commit()
except sqlite.IntegrityError, e:
# We'll get an IntegrityError if the record already exists:
# No need to add it.
self.db.rollback()
        except Exception, e:
            self.fatalError = True
            raise plugin.SplatPluginError, "Failed to commit group record to database for dn %s: %s" % (ldapEntry.dn, e)
# Insert the group membership record
insertData = {
'groupName' : context.opennmsGroup,
'userName' : ldapEntry.attributes[context.attrmap[OU_USERNAME]][0]
}
try:
self._insertDict("GroupMembers", insertData)
self.db.commit()
except Exception, e:
self.fatalError = True
            raise plugin.SplatPluginError, "Failed to commit group membership record to database for dn %s: %s" % (ldapEntry.dn, e)
def work (self, context, ldapEntry, modified):
# We need to pull the location of the user file out of the first configuration
# context we get.
if (self.usersFile == None):
self.usersFile = context.usersFile
self.groupsFile = context.groupsFile
else:
# Is the setting still the same? It's not overridable.
if (self.usersFile != context.usersFile):
self.fatalError = True
raise plugin.SplatPluginError, "The \"usersFile\" setting may not be overridden in a group configuration"
if (self.groupsFile != context.groupsFile):
self.fatalError = True
raise plugin.SplatPluginError, "The \"groupsFile\" setting may not be overridden in a group configuration"
# Insert the user record
self._insertUserRecord(context, ldapEntry)
# Insert the group record
if (context.opennmsGroup != None):
self._insertGroupRecord(context, ldapEntry)
def _writeXML (self, etree, filePath):
# Write out the new XML file. mkstemp()-created files are
# "readable and writable only by the creating user ID", so we'll use that,
# and then reset permissions to match the original file.
# Open the temporary file
try:
outputDir = os.path.dirname(filePath)
(tempFd, tempPath) = tempfile.mkstemp(dir=outputDir)
except Exception, e:
raise plugin.SplatPluginError, "Failed to create output file: %s" % e
# Wrap the file descriptor
try:
output = os.fdopen(tempFd, 'w')
except Exception, e:
# Highly unlikely
os.unlink(tempPath)
raise plugin.SplatPluginError, "Failed to open output file: %s" % e
# Dump the XML
try:
etree.doc.write(output, XML_ENCODING)
output.close()
except Exception, e:
os.unlink(tempPath)
raise plugin.SplatPluginError, "Failed to write to output file: %s" % e
# Set permissions
try:
fstat = os.stat(filePath)
os.chmod(tempPath, stat.S_IMODE(fstat.st_mode))
os.chown(tempPath, fstat.st_uid, fstat.st_gid)
except Exception, e:
os.unlink(tempPath)
raise plugin.SplatPluginError, "Failed to set output permissions: %s" % e
        # Atomically replace the old file
try:
os.rename(tempPath, filePath)
except Exception, e:
os.unlink(tempPath)
raise plugin.SplatPluginError, "Failed to rename output file: %s" % e
def _finishUsers (self):
# Open up the OpenNMS user database.
try:
userdb = Users(self.usersFile)
except Exception, e:
raise plugin.SplatPluginError, "Failed to open %s: %s" % (self.usersFile, e)
# User Update/Insert Pass: Iterate over each user in the LDAP result set.
# If they currently exist in the OpenNMS db, update their record.
# If they do not exist in the OpenNMS db, add their record.
cur = self.db.cursor()
cur.row_factory = _sqlite_dict_factory
cur.execute("SELECT * from Users")
for ldapRecord in cur:
user = userdb.findUser(ldapRecord[OU_USERNAME])
if (user == None):
user = userdb.createUser(ldapRecord[OU_USERNAME])
# Clean up the result for use as arguments
del ldapRecord[OU_USERNAME]
del ldapRecord[OU_LDAP_DN]
userdb.updateUser(user, **ldapRecord)
        # User deletion pass. For each user in the OpenNMS db, check if they
        # are to be found in the LDAP result set. If not, clear out
        # their record.
for user in userdb.getUsers():
userId = user.find("user-id")
            if (userId == None):
                logger.error("Corrupt OpenNMS user record, missing user-id: %s" % ElementTree.tostring(user))
                continue
cur = self.db.cursor()
cur.execute("SELECT COUNT(*) FROM Users WHERE userName=?", (userId.text,))
if (cur.fetchone()[0] == 0):
userdb.deleteUser(userId.text)
self._writeXML(userdb, self.usersFile)
def _finishGroups (self):
try:
groupdb = Groups(self.groupsFile)
except Exception, e:
raise plugin.SplatPluginError, "Failed to open %s: %s" % (self.groupsFile, e)
# Group Update/Insert Pass: Iterate over each group in the LDAP result set.
# If it currently exists in the OpenNMS db, update the record.
# If it does not exist in the OpenNMS db, add the record.
groupCursor = self.db.cursor()
groupCursor.row_factory = _sqlite_dict_factory
groupCursor.execute("SELECT * from Groups")
for ldapRecord in groupCursor:
groupName = ldapRecord[OG_GROUPNAME]
group = groupdb.findGroup(groupName)
if (group == None):
group = groupdb.createGroup(groupName)
# Set group members
memberCursor = self.db.cursor()
memberCursor.row_factory = _sqlite_dict_factory
memberCursor.execute("SELECT userName FROM GroupMembers WHERE groupName = ?", (groupName,))
groupMembers = []
for member in memberCursor:
groupMembers.append(member[OU_USERNAME])
groupdb.setMembers(group, groupMembers)
        # Group deletion pass. For each group in the OpenNMS db, check if it
        # is to be found in the LDAP result set. If not, clear out
        # the record.
for group in groupdb.getGroups():
groupName = group.find("name")
            if (groupName == None):
                logger.error("Corrupt OpenNMS group record, missing name: %s" % ElementTree.tostring(group))
                continue
countCursor = self.db.cursor()
countCursor.execute("SELECT COUNT(*) FROM Groups WHERE groupName=?", (groupName.text,))
if (countCursor.fetchone()[0] == 0):
groupdb.deleteGroup(groupName.text)
self._writeXML(groupdb, self.groupsFile)
def finish (self):
# If something terrible happened, don't overwrite the user XML file
if (self.fatalError):
return
# If no work was done, there won't be a users file
if (self.usersFile == None):
return
# User pass
self._finishUsers()
# Group pass
self._finishGroups()
class Users (object):
def __init__ (self, path):
self.doc = ElementTree.ElementTree(file = path)
def findUser (self, username):
for entry in self.getUsers():
userId = entry.find("user-id")
if (userId != None and userId.text == username):
return entry
# Not found
return None
def _getUsersElement (self):
# Retrieve the <users> element
return self.doc.find("./{%s}users" % (XML_USERS_NAMESPACE))
@classmethod
def _setChildElementText (self, parentNode, nodeName, text):
node = parentNode.find(nodeName)
node.text = text
@classmethod
def _setContactInfo (self, parentNode, contactType, info, serviceProvider = None):
node = self._findUserContact(parentNode, contactType)
node.set("info", info)
if (serviceProvider != None):
node.set("serviceProvider", serviceProvider)
@classmethod
def _findUserContact (self, parentNode, contactType):
nodes = parentNode.findall("./{%s}contact" % (XML_USERS_NAMESPACE))
for node in nodes:
if (node.get("type") == contactType):
return node
return None
def getUsers (self):
"""
Returns an iterator over all user elements
"""
return self.doc.findall("./{%s}users/*" % (XML_USERS_NAMESPACE))
def deleteUser (self, username):
user = self.findUser(username)
if (user == None):
raise NoSuchUserException("Could not find user %s." % username)
users = self._getUsersElement()
users.remove(user)
def createUser (self, username, fullName = "", comments = "", password = "XXX"):
"""
Insert and return a new user record.
@param username User's login name
@param fullName User's full name.
@param comments User comments.
@param password User's password (unused if LDAP auth is enabled)
"""
if (self.findUser(username) != None):
raise UserExistsException("User %s exists." % username)
# Create the user record
user = ElementTree.SubElement(self._getUsersElement(), "{%s}user" % XML_USERS_NAMESPACE)
# Set up the standard user data
userId = ElementTree.SubElement(user, "user-id")
userId.text = username
        fullNameNode = ElementTree.SubElement(user, "full-name")
        fullNameNode.text = fullName
userComments = ElementTree.SubElement(user, "user-comments")
userComments.text = comments
userPassword = ElementTree.SubElement(user, "password")
userPassword.text = password
# Add the required (blank) contact records
# E-mail
ElementTree.SubElement(user, "{%s}contact" % XML_USERS_NAMESPACE, type="email", info="")
# Pager E-mail
ElementTree.SubElement(user, "{%s}contact" % XML_USERS_NAMESPACE, type="pagerEmail", info="")
# Jabber Address
ElementTree.SubElement(user, "{%s}contact" % XML_USERS_NAMESPACE, type="xmppAddress", info="")
# Numeric Pager
ElementTree.SubElement(user, "{%s}contact" % XML_USERS_NAMESPACE, type="numericPage", info="", serviceProvider="")
# Text Pager
ElementTree.SubElement(user, "{%s}contact" % XML_USERS_NAMESPACE, type="textPage", info="", serviceProvider="")
return user
def updateUser (self, user, fullName = None, comments = None, email = None,
pagerEmail = None, xmppAddress = None, numericPager = None, numericPagerService = None,
textPager = None, textPagerService = None):
"""
Update a user record.
<user>
<user-id xmlns="">admin</user-id>
<full-name xmlns="">Administrator</full-name>
<user-comments xmlns="">Default administrator, do not delete</user-comments>
<password xmlns="">xxxx</password>
<contact type="email" info=""/>
<contact type="pagerEmail" info=""/>
<contact type="xmppAddress" info=""/>
<contact type="numericPage" info="" serviceProvider=""/>
<contact type="textPage" info="" serviceProvider=""/>
</user>
@param user: User XML node to update.
@param fullName: User's full name.
@param comments: User comments.
@param email: User's e-mail address.
@param pagerEmail: User's pager e-mail address.
@param xmppAddress: User's Jabber address.
        @param numericPager: User's numeric pager number.
        @param numericPagerService: Service provider for the numeric pager.
        @param textPager: User's text pager number.
        @param textPagerService: Service provider for the text pager.
"""
if (fullName != None):
self._setChildElementText(user, "full-name", fullName)
if (comments != None):
self._setChildElementText(user, "user-comments", comments)
if (email != None):
self._setContactInfo(user, "email", email)
if (pagerEmail != None):
self._setContactInfo(user, "pagerEmail", pagerEmail)
if (xmppAddress != None):
self._setContactInfo(user, "xmppAddress", xmppAddress)
if (numericPager != None):
self._setContactInfo(user, "numericPage", numericPager, numericPagerService)
if (textPager != None):
            self._setContactInfo(user, "textPage", textPager, textPagerService)
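    # Example usage (illustrative only; the file path and user record below are
    # hypothetical, not taken from a real OpenNMS installation):
    #   userdb = Users("/path/to/users.xml")
    #   user = userdb.findUser("jdoe") or userdb.createUser("jdoe")
    #   userdb.updateUser(user, fullName="Jane Doe", email="jdoe@example.com",
    #                     numericPager="5551234", numericPagerService="provider")
    # Only keyword arguments that are not None are written into the XML record.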
class Groups (object):
"""
<?xml version="1.0" encoding="UTF-8"?>
<groupinfo xmlns="http://xmlns.opennms.org/xsd/groups"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:type="groupinfo">
<ns1:header xmlns:ns1="http://xmlns.opennms.org/xsd/types">
<rev xmlns="">1.3</rev>
<created xmlns="">Monday, May 7, 2007 9:57:05 PM GMT</created>
<mstation xmlns="">dhcp-219.internal.opennms.org</mstation>
</ns1:header>
<groups>
<group>
<name xmlns="">Admin</name>
<comments xmlns="">The administrators</comments>
<user xmlns="">admin</user>
<user xmlns="">landonf</user>
</group>
</groups>
</groupinfo>
"""
def __init__ (self, path):
self.doc = ElementTree.ElementTree(file = path)
def getGroups (self):
return self.doc.findall("./{%s}groups/*" % (XML_GROUPS_NAMESPACE))
def findGroup (self, groupName):
for entry in self.getGroups():
groupId = entry.find("name")
if (groupId != None and groupId.text == groupName):
return entry
# Not found
return None
def _getGroupsElement (self):
return self.doc.find("./{%s}groups" % (XML_GROUPS_NAMESPACE))
def createGroup (self, groupName, comments = ""):
"""
Insert and return a new group record.
@param groupName Group name.
@param comments Group comments.
"""
if (self.findGroup(groupName) != None):
raise GroupExistsException("Group %s exists." % groupName)
# Create the group record
group = ElementTree.SubElement(self._getGroupsElement(), "group")
# Set up the standard group data
groupId = ElementTree.SubElement(group, "name", xmlns="")
groupId.text = groupName
groupComments = ElementTree.SubElement(group, "comments", xmlns="")
groupComments.text = comments
return group
def deleteGroup (self, groupName):
group = self.findGroup(groupName)
if (group == None):
raise NoSuchUserException("Could not find group %s." % groupName)
groups = self._getGroupsElement()
groups.remove(group)
def setMembers (self, group, members):
"""
        Set a group's members.
@param group Group XML node to update.
@param members A list of member names.
"""
# Delete existing user entries
entries = group.findall("./user")
for entry in entries:
group.remove(entry)
# Add new user entries
for member in members:
entry = ElementTree.SubElement(group, "user", xmlns="")
entry.text = member
|
A claim can be made for wrongful death when the death is caused by the wrongful act or omission of any person or corporation. Wrongful death claims may be brought as a result of a variety of different circumstances, including car accidents, construction accidents, defective products, semitruck accidents, rollover accidents, motorcycle accidents, asbestos exposure and medical negligence. The statute of limitations in Minnesota for wrongful death claims is three years from the date of death.
What Kinds Of Damages Are Available For Wrongful Death Claims?
Claims for wrongful death are brought by the next of kin of the victim, which generally include a spouse, children, parents and siblings. Claims can be made for income loss, health benefits, loss of the relationship as well as claims for medical expenses and funeral expenses.
The death of a loved one is a tragic loss, and though financial compensation can never replace it, the attorneys at Sieben Polk P.A. will fight vigorously on behalf of the survivors. For more than 50 years, the Sieben Polk P.A. wrongful death attorneys have successfully represented clients in wrongful death claims, obtaining large settlements and verdicts.
It is important to act quickly after an accidental death to allow for a prompt investigation and evaluation of the facts related to the tragic loss. Call 651-304-6708 or toll free at 800-620-1829 today for a free consultation with one of our experienced Minnesota wrongful death attorneys so we can ensure that all of the compensation you are entitled to pursue is obtained for you.
|
from ..model import ik,types,config
from ..math import vectorops
from ..robotsim import IKSolver,IKObjective,RobotModelLink
from ..io import loader
import time
import random
from ..math import optimize,symbolic,symbolic_klampt,so3,se3
import numpy as np
class KlamptVariable:
"""
Attributes:
name (str): the Klamp't item's name
type (str): the Klamp't item's type
encoding (str): the way in which the item is encoded in the optimization
variables (list of Variable): the list of Variables encoding this Klamp't item
expr (Expression): the Expression that will be used to replace the symbolic mainVariable via
appropriate variables
constraints, encoder, decoder: internally used
"""
def __init__(self,name,type):
self.name = name
self.type = type
self.encoding = None
self.variables = None
self.expr = None
self.constraints = []
self.encoder = None
self.decoder = None
def bind(self,obj):
"""Binds all Variables associated with this to the value of Klamp't object obj"""
if self.type in ['Config','Vector','Vector3','Point']:
self.variables[0].bind(obj)
elif self.type == 'Configs':
assert len(obj) == len(self.variables),"Invalid number of configs in Configs object"
for i,v in enumerate(obj):
self.variables[i].bind(v)
elif self.type == 'Rotation':
if self.encoder is None:
self.variables[0].bind(obj)
else:
self.variables[0].bind(self.encoder(obj))
elif self.type == 'RigidTransform':
if self.encoder is None:
self.variables[0].bind(obj[0])
self.variables[1].bind(obj[1])
else:
T = self.encoder(obj)
self.variables[0].bind(T[0])
self.variables[1].bind(T[1])
else:
raise ValueError("Unsupported object type "+self.type)
def getParams(self):
"""Returns the list of current parameters bound to the symbolic Variables."""
if len(self.variables) > 1:
return [v.value for v in self.variables]
else:
return self.variables[0].value
def getValue(self):
"""Returns the Klamp't value corresponding to the current bound parameters."""
return self.decode(self.getParams())
def unbind(self):
"""Unbinds all Variables associated with this."""
for v in self.variables:
v.unbind()
def encode(self,obj):
"""Returns the parameters giving the encoding of the Klamp't object obj"""
if self.encoder is None:
return obj
else:
return self.encoder(obj)
def decode(self,params):
"""Returns the Klamp't object given a parameters encoding it"""
if self.decoder is None:
return params
else:
return self.decoder(params)
class RobotOptimizationProblem(optimize.OptimizationProblemBuilder):
"""Defines a generalized optimization problem for a robot, which is a subclass of
OptimizationProblemBuilder. This may easily incorporate IK constraints, and may
have additional specifications of active DOF.
Attributes:
        robot (RobotModel): the robot whose configuration is being optimized
world (WorldModel, optional): the world containing possible obstacles
context (KlamptContext, inherited): a symbolic.KlamptContext that stores the variable q
denoting the robot configuration, as well as any user data. User data "robot" and "world"
are available by default.
q (Variable): the primary optimization variable.
activeDofs (list): the list of active robot DOFs.
autoLoad (dict): a dictionary of (userDataName:fileName) pairs that are stored so that user data
is automatically loaded from files. I.e., upon self.loadJson(), for each pair in autoLoad
the command self.context.userData[userDataName] = loader.load(fileName) is executed.
managedVariables (dict of KlamptVariable): a dictionary of KlamptVariables like rotations and
rigid transforms.
Managed variables should be referred to in parsed expressions with the prefix @name,
and are encoded into optimization form and decoded from optimization form
using KlamptVariable.bind / KlamptVariable.unbind. You can also retrieve the Klampt value
by KlamptVariable.getValue().
If you would like to find the configuration *closest* to solving the
IK constraints, either add the IK constraints one by one with weight=1 (or some other
numeric value), or call enableSoftIK() after the constraints have been added. In this
case, solve will always return a solution, as long as it finds a configuration that
passes the feasibility tests. The optimization method changes so that it 1) optimizes
the residual norm, and then 2) optimizes the cost function to maintain the residual
norm at its current value. In other words, minimizing error is the first priority and
minimizing cost is the second priority.
"""
def __init__(self,robot=None,world=None,*ikgoals):
self.robot = robot
self.world = world
if self.world is not None and robot is None and self.world.numRobots() > 0:
robot = self.world.robot(0)
self.robot = robot
context = symbolic_klampt.KlamptContext()
context.addUserData("robot",self.robot)
if self.world:
context.addUserData("world",self.world)
optimize.OptimizationProblemBuilder.__init__(self,context)
self.activeDofs = None
self.autoLoad = dict()
nlinks = robot.numLinks() if robot is not None else None
self.q = self.context.addVar('q','V',nlinks)
self.managedVariables = dict()
self.optimizationVariables = [self.q]
self.setJointLimits()
for goal in ikgoals:
self.addIKObjective(goal)
def isIKObjective(self,index):
"""Returns True if the indexed constraint is an IKObjective"""
if self.objectives[index].type != "eq":
return False
return symbolic.is_op(self.objectives[index].expr,'ik.residual')
def getIKObjective(self,index):
"""Returns the IKObjective the indexed constraint is an IKObjective"""
res = self.objectives[index].expr.args[0]
assert isinstance(res,symbolic.ConstantExpression) and isinstance(res.value,IKObjective),"Not an IK objective: "+str(self.objectives[index].expr)
return res.value
def enableSoftIK(self,enabled=True):
"""Turns on soft IK solving. This is the same as hard IK solving if all
constraints can be reached, but if the constraints cannot be reached, it will
try to optimize the error.
"""
        for i,o in enumerate(self.objectives):
            if self.isIKObjective(i):
                o.soft = enabled
def addIKObjective(self,obj,weight=None):
"""Adds a new IKObjective to the problem. If weight is not None, it is
added as a soft constraint."""
assert isinstance(obj,IKObjective)
self.addEquality(self.context.ik.residual(obj,self.context.setConfig("robot",self.q)),weight)
if hasattr(obj,'robot'):
if self.robot is None:
self.robot = obj.robot
else:
assert self.robot.index == obj.robot.index,"All objectives must be on the same robot"
def addUserData(self,name,fn):
"""Adds an auto-loaded userData. Raises an exception if fn cannot be loaded.
Arguments:
- name: the name of the userData.
- fn: the file from which it is loaded. It must be loadable with loader.load.
"""
assert isinstance(fn,str)
obj = loader.load(fn)
self.context.addUserData(name,obj)
self.autoLoad[name] = fn
def addKlamptVar(self,name,type=None,initialValue=None,encoding='auto',constraints=True,optimize=True):
"""Adds one or more variables of a given Klamp't type (e.g., "Config", "Rotation", "RigidTransform").
If necessary, constraints on the object will also be added, e.g., joint limits, or a quaternion unit
norm constraint.
At least one of type / initialValue must be provided.
Args:
name (str): a name for the variable.
type (str, optional): a supported variable type (default None determines the type by initialValue).
Supported types include "Config", "Configs", Rotation", "RigidTransform", "Vector3". Future
work may support Trajectory and other types.
initialValue (optional): the configuration of the variable. If it's a float, the type will be set to
numeric, if it's a list it will be set to a vector, or if its a supported object, the type will
be set appropriately and config.getConfig(initialValue) will be used for its parameter setting.
encoding (str, optional): only supported for Rotation and RigidTransform types, and defines how the
variable will be parameterized in optimization. Can be:
- 'rotation_vector' (default) for rotation vector, 3 parameters
- 'quaternion' for quaternion encoding, 4 parameters + 1 constraint
- 'rpy' for roll-pitch-yaw euler angles, 3 parameters
- None for full rotation matrix (9 parameters, 6 constraints)
                - 'auto' (equivalent to 'rotation_vector')
constraints (bool, optional): True if all default constraints are to be added. For Config / Configs
types, bound constraints at the robot's joint limits are added.
optimize (bool, optional): If True, adds the parameterized variables to the list of optimization
variables.
Returns:
KlamptVariable: an object containing information about the encoding of the variable.
Note that extra symbolic Variable names may be decorated with extensions in the form of "_ext" if
the encoding is not direct.
"""
if type is None:
assert initialValue is not None,"Either type or initialValue must be provided"
type = types.objectToTypes(initialValue)
if type in ['Vector3','Point']:
if initialValue is None:
initialValue = [0.0]*3
else:
assert len(initialValue)==3
type = 'Vector'
def default(name,value):
v = self.context.addVar(name,"V",len(value))
v.value = value[:]
return v
if name in self.managedVariables:
raise ValueError("Klamp't variable name "+name+" already defined")
kv = KlamptVariable(name,type)
if type == 'Config':
if initialValue is None:
initialValue = self.robot.getConfig()
else:
assert len(initialValue) == self.robot.numLinks()
v = default(name,initialValue)
if constraints:
self.setBounds(v.name,*self.robot.getJointLimits())
kv.constraints = [self.robot.getJointLimits()]
elif type == 'Vector':
assert initialValue is not None,"Need to provide initialValue for "+type+" type variables"
v = default(name,initialValue)
            kv.expr = symbolic.VariableExpression(v)
elif type == 'Configs':
assert initialValue is not None,"Need to provide initialValue for "+type+" type variables"
vals = []
for i,v in enumerate(initialValue):
vals.append(default(name+"_"+str(i),v))
if constraints:
self.setBounds(vals[-1].name,*self.robot.getJointLimits())
kv.constraints.append(self.robot.getJointLimits())
kv.variables = vals
kv.expr = symbolic.list_(*vals)
elif type == 'Rotation':
if encoding == 'auto': encoding='rotation_vector'
if encoding == 'rotation_vector':
if initialValue is not None:
initialValue2 = so3.rotation_vector(initialValue)
else:
initialValue = so3.identity()
initialValue2 = [0.0]*3
v = default(name+"_rv",initialValue2)
kv.expr = self.context.so3.from_rotation_vector(v)
kv.decoder = so3.from_rotation_vector
kv.encoder = so3.rotation_vector
elif encoding == 'quaternion':
if initialValue is not None:
initialValue2 = so3.quaternion(initialValue)
else:
initialValue = so3.identity()
initialValue2 = [1,0,0,0]
v = default(name+"_q",initialValue2)
kv.expr = self.context.so3.from_quaternion(v)
kv.decoder = so3.from_quaternion
kv.encoder = so3.quaternion
if constraints:
f = self.addEquality(self.context.so3.quaternion_constraint(v))
f.name = name+"_q_constraint"
kv.constraints = [f]
elif encoding == 'rpy':
if initialValue is not None:
initialValue2 = so3.rpy(initialValue)
else:
initialValue = so3.identity()
initialValue2 = [0.0]*3
v = default(name+"_rpy",initialValue2)
kv.expr = self.context.so3.from_rpy(v)
kv.decoder = so3.from_rpy
kv.encoder = so3.rpy
elif encoding is None:
if initialValue is None:
initialValue = so3.identity()
v = self.addVar(name,"Vector",initialValue)
if constraints:
f = self.addEquality(self.context.so3.eq_constraint(v))
f.name = name+"_constraint"
kv.constraints = [f]
else:
raise ValueError("Invalid encoding "+str(encoding))
kv.encoding = encoding
elif type == 'RigidTransform':
if initialValue is None:
Ri,ti = None,[0.0]*3
else:
Ri,ti = initialValue
kR = self.addKlamptVar(name+'_R','Rotation',Ri,constraints=constraints,encoding=encoding)
t = default(name+'_t',ti)
kv.variables = kR.variables+[t]
kv.constraints = kR.constraints
kv.expr = symbolic.list_(kR.expr,t)
kv.encoding = encoding
if kR.encoder is not None:
kv.encoder = lambda T:(kR.encoder(T[0]),T[1])
kv.decoder = lambda T:(kR.decoder(T[0]),T[1])
del self.managedVariables[kR.name]
else:
raise ValueError("Unsupported object type "+type)
if kv.variables is None:
kv.variables = [v]
if kv.expr is None:
kv.expr = symbolic.VariableExpression(v)
self.context.addExpr(name,kv.expr)
if optimize:
for v in kv.variables:
self.optimizationVariables.append(v)
self.managedVariables[name] = kv
return kv
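    # Example usage (illustrative sketch; the variable name "T" and the robot
    # model are hypothetical):
    #   problem = RobotOptimizationProblem(robot=robot)
    #   T = problem.addKlamptVar("T", "RigidTransform", encoding='rotation_vector')
    #   # T is a KlamptVariable; refer to it as @T in parsed expressions.
    #   T.bind(se3.identity())   # bind the underlying parameters from an (R,t) pair
    #   T.getValue()             # decode the parameters back to an (R,t) pair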
def get(self,name,defaultValue=None):
"""Returns a Variable or UserData in the context, or a managed KlamptVariable. If the item
does not exist, defaultValue is returned.
"""
if name in self.managedVariables:
return self.managedVariables[name]
else:
return self.context.get(name,defaultValue)
def rename(self,itemname,newname):
"""Renames a Variable, UserData, or managed KlamptVariable."""
if itemname in self.managedVariables:
item = self.managedVariables[itemname]
del self.managedVariables[itemname]
item.name = newname
print("Renaming KlamptVariable",itemname)
self.context.expressions[newname] = self.context.expressions[itemname]
del self.context.expressions[itemname]
for var in item.variables:
varnewname = newname + var.name[len(itemname):]
print(" Renaming internal variable",var.name,"to",varnewname)
if var.name in self.variableBounds:
self.variableBounds[varnewname] = self.variableBounds[var.name]
del self.variableBounds[var.name]
self.context.renameVar(var,varnewname)
self.managedVariables[newname] = item
elif itemname in self.context.userData:
self.context.renameUserData(itemname,newname)
else:
var = self.context.variableDict[itemname]
if var.name in self.variableBounds:
self.variableBounds[newname] = self.variableBounds[var.name]
del self.variableBounds[var.name]
self.context.renameVar(var,newname)
def setActiveDofs(self,links):
"""Sets the list of active DOFs. These may be indices, RobotModelLinks, or strings."""
self.activeDofs = []
for v in links:
if isinstance(v,str):
self.activeDofs.append(self.robot.link(v).index)
elif isinstance(v,RobotModelLink):
self.activeDofs.append(v.index)
else:
assert isinstance(v,int)
self.activeDofs.append(v)
def enableDof(self,link):
"""Enables an active DOF. If this is the first time enableDof is called,
this initializes the list of active DOFs to the single link. Otherwise
it appends it to the list. (By default, all DOFs are enabled)"""
        if isinstance(link,str):
            link = self.robot.link(link).index
        elif isinstance(link,RobotModelLink):
            link = link.index
        else:
            assert isinstance(link,int)
if self.activeDofs is None:
self.activeDofs = [link]
else:
if link not in self.activeDofs:
self.activeDofs.append(link)
def disableJointLimits(self):
"""Disables joint limits. By default, the robot's joint limits are
used."""
self.setBounds("q",None,None)
def setJointLimits(self,qmin=None,qmax=None):
"""Sets the joint limits to the given lists qmin,qmax. By default,
the robot's joint limits are used."""
if qmin is None:
self.setBounds("q",*self.robot.getJointLimits())
return
#error checking
assert(len(qmin)==len(qmax))
if len(qmin)==0:
#disabled bounds
self.setBounds("q",None,None)
else:
if self.activeDofs is not None:
assert(len(qmin)==len(self.activeDofs))
raise NotImplementedError("What to do when you set joint limits on a subset of DOFS?")
else:
if self.robot is not None:
assert(len(qmin) == self.robot.numLinks())
self.setBounds("q",qmin,qmax)
def inJointLimits(self,q):
"""Returns True if config q is in the currently set joint limits."""
qmin,qmax = self.variableBounds.get('q',self.robot.getJointLimits())
if len(qmin) == 0:
return True
if len(qmin) > 0:
for v,a,b in zip(q,qmin,qmax):
if v < a or v > b:
return False
return True
def toJson(self,saveContextFunctions=False,prettyPrintExprs=False):
res = optimize.OptimizationProblemBuilder.toJson(self,saveContextFunctions,prettyPrintExprs)
if self.activeDofs is not None:
res['activeDofs'] = self.activeDofs
if len(self.managedVariables) > 0:
varobjs = []
for (k,v) in self.managedVariables.items():
varobj = dict()
assert k == v.name
varobj['name'] = v.name
varobj['type'] = v.type
varobj['encoding'] = v.encoding
varobjs.append(varobj)
res['managedVariables'] = varobjs
if len(self.autoLoad) > 0:
res['autoLoad'] = self.autoLoad
return res
def fromJson(self,obj,doAutoLoad=True):
"""Loads from a JSON-compatible object.
Args:
obj: the JSON-compatible object
doAutoLoad (bool, optional): if True, performs the auto-loading step. An IOError is raised if any
item can't be loaded.
"""
optimize.OptimizationProblemBuilder.fromJson(self,obj)
if 'activeDofs' in obj:
self.activeDofs = obj['activeDofs']
else:
self.activeDofs = None
        assert 'q' in self.context.variableDict,'Strange, the loaded JSON file does not have a configuration q variable?'
self.q = self.context.variableDict['q']
if 'managedVariables' in obj:
self.managedVariables = dict()
for v in obj['managedVariables']:
name = v['name']
type = v['type']
encoding = v['encoding']
raise NotImplementedError("TODO: load managed variables from disk properly")
                self.managedVariables[name] = self.addKlamptVar(name,type,encoding=encoding)
if doAutoLoad:
self.autoLoad = obj.get('autoLoad',dict())
for (name,fn) in self.autoLoad.items():
try:
obj = loader.load(fn)
except Exception:
raise IOError("Auto-load item "+name+": "+fn+" could not be loaded")
self.context.addUserData(name,obj)
def solve(self,params=optimize.OptimizerParams()):
"""Locally or globally solves the given problem (using the robot's current configuration
as a seed if params.startRandom=False). Returns the solution configuration or
None if failed.
Args:
params (OptimizerParams, optional): configures the optimizer.
"""
if len(self.objectives) == 0:
print("Warning, calling solve without setting any constraints?")
return self.robot.getConfig()
robot = self.robot
solver = IKSolver(robot)
for i,obj in enumerate(self.objectives):
if self.isIKObjective(i):
ikobj = self.getIKObjective(i)
ikobj.robot = self.robot
solver.add(ikobj)
if self.activeDofs is not None:
solver.setActiveDofs(self.activeDofs)
ikActiveDofs = self.activeDofs
if 'q' in self.variableBounds:
solver.setJointLimits(*self.variableBounds['q'])
qmin,qmax = solver.getJointLimits()
if len(qmin)==0:
qmin,qmax = self.robot.getJointLimits()
backupJointLimits = None
if self.activeDofs is None:
#need to distinguish between dofs that affect feasibility vs IK
ikActiveDofs = solver.getActiveDofs()
if any(obj.type != 'ik' for obj in self.objectives):
activeDofs = [i for i in range(len(qmin)) if qmin[i] != qmax[i]]
activeNonIKDofs = [i for i in activeDofs if i not in ikActiveDofs]
ikToActive = [activeDofs.index(i) for i in ikActiveDofs]
else:
activeDofs = ikActiveDofs
                activeNonIKDofs = []
ikToActive = list(range(len(activeDofs)))
else:
activeDofs = ikActiveDofs
activeNonIKDofs = []
ikToActive = list(range(len(ikActiveDofs)))
anyIKProblems = False
anyCosts = False
softIK = False
for obj in self.objectives:
if obj.type == 'ik':
anyIKProblems = True
if obj.soft:
softIK = True
elif obj.type == 'cost' or obj.soft:
anyCosts = True
#sample random start point
if params.startRandom:
self.randomVarBinding()
solver.sampleInitial()
if len(activeNonIKDofs)>0:
q = robot.getConfig()
for i in activeNonIKDofs:
q[i] = random.uniform(qmin[i],qmax[i])
robot.setConfig(q)
if params.localMethod is not None or params.globalMethod is not None or (anyCosts or not anyIKProblems):
#set up optProblem, an instance of optimize.Problem
assert self.optimizationVariables[0] is self.q
if len(activeDofs) < self.robot.numLinks():
#freeze those inactive DOFs
q = self.robot.getConfig()
backupJointLimits = qmin[:],qmax[:]
inactiveDofs = set(range(len(q))) - set(activeDofs)
for i in inactiveDofs:
qmin[i] = q[i]
qmax[i] = q[i]
self.setBounds("q",qmin,qmax)
reducedProblem,reducedToFullMapping,fullToReducedMapping = self.preprocess()
optq = reducedProblem.context.variableDict['q']
print("Preprocessed problem:")
reducedProblem.pprint()
optProblem = reducedProblem.getProblem()
assert backupJointLimits is not None
self.setBounds("q",*backupJointLimits)
else:
optq = self.q
optProblem = self.getProblem()
reducedToFullMapping = fullToReducedMapping = None
#optProblem is now ready to use
if params.globalMethod is not None:
#set seed = robot configuration
if self.q.value is None:
self.q.bind(robot.getConfig())
if reducedToFullMapping is None:
x0 = self.getVarVector()
else:
for var,vexpr in zip(reducedProblem.optimizationVariables,fullToReducedMapping):
var.bind(vexpr.eval(self.context))
x0 = reducedProblem.getVarVector()
#do global optimization of the cost function and return
(succ,res) = params.solve(optProblem,x0)
if not succ:
print("Global optimize returned failure")
return None
if reducedToFullMapping is not None:
reducedProblem.setVarVector(res)
for var,vexpr in zip(self.optimizationVariables,reducedToFullMapping):
var.bind(vexpr.eval(reducedProblem.context))
else:
self.setVarVector(res)
#check feasibility if desired
if not self.inJointLimits(self.q.value):
print("Result from global optimize is out of joint limits")
return None
if not self.feasibilityTestsPass():
print("Result from global optimize isn't feasible")
return None
if not self.satisfiesEqualities(params.tol):
print("Result from global optimize doesn't satisfy tolerance.")
return None
#passed
print("Global optimize succeeded! Cost",self.cost())
q = self.q.value
return q
if anyIKProblems:
print("Performing random-restart newton raphson")
#random-restart newton-raphson
solver.setMaxIters(params.numIters)
solver.setTolerance(params.tol)
best = None
bestQuality = float('inf')
if softIK:
#quality is a tuple
bestQuality = bestQuality,bestQuality
quality = None
            tstart = time.time()
            for restart in range(params.numRestarts):
                if time.time() - tstart > params.timeout:
                    #timed out; return the best solution found so far, if any
                    return best[0] if best is not None else None
                res = solver.solve()
                if res or softIK:
q = robot.getConfig()
print("Got a solve, checking feasibility...")
#check feasibility if desired
self.q.bind(q)
if not self.feasibilityTestsPass():
print("Failed feasibility")
#TODO: resample other non-robot optimization variables
                        if len(activeNonIKDofs) > 0:
                            u = float(restart+0.5)/params.numRestarts
                            q = robot.getConfig()
                            #perturbation sampling for non-IK dofs
                            for i in activeNonIKDofs:
                                delta = u*(qmax[i]-qmin[i])*0.5
                                q[i] = random.uniform(max(q[i]-delta,qmin[i]),min(q[i]+delta,qmax[i]))
robot.setConfig(q)
self.q.bind(q)
if not self.feasibilityTestsPass():
solver.sampleInitial()
continue
else:
solver.sampleInitial()
continue
print("Found a feasible config")
if softIK:
residual = solver.getResidual()
ikerr = max(abs(v) for v in residual)
if ikerr < params.tol:
ikerr = 0
else:
#minimize squared error
ikerr = vectorops.normSquared(residual)
if not anyCosts:
cost = 0
if ikerr == 0:
#feasible and no cost
return q
else:
cost = self.cost()
quality = ikerr,cost
else:
if not anyCosts:
#feasible, no costs, so we're done
print("Feasible and no costs, we're done")
return q
else:
#optimize
                        quality = self.cost()
print("Quality of solution",quality)
if quality < bestQuality:
best = self.getVarValues()
bestQuality = quality
#sample a new ik seed
solver.sampleInitial()
            if best is None or params.localMethod is None:
                return None if best is None else best[0]
print("Performing post-optimization")
#post-optimize using local optimizer
self.setVarValues(best)
if softIK:
if not self.satisfiesEqualities(params.tol):
raise NotImplementedError("TODO: add soft IK inequality constraint |ik residual| <= |current ik residual|")
optSolver = optimize.LocalOptimizer(method=params.localMethod)
if reducedToFullMapping is not None:
for var,vexpr in zip(reducedProblem.optimizationVariables,fullToReducedMapping):
var.bind(vexpr.eval(self.context))
x0 = reducedProblem.getVarVector()
else:
x0 = self.getVarVector()
optSolver.setSeed(x0)
res = optSolver.solve(optProblem,params.numIters,params.tol)
if res[0]:
if reducedToFullMapping is not None:
reducedProblem.setVarVector(res[1])
for var,vexpr in zip(self.optimizationVariables,reducedToFullMapping):
var.bind(vexpr.eval(reducedProblem.context))
else:
self.setVarVector(res[1])
#check feasibility if desired
if not self.feasibilityTestsPass():
pass
elif not anyCosts:
#feasible
best = self.getVarValues()
else:
#optimize
quality = self.cost()
if quality < bestQuality:
#print "Optimization improvement",bestQuality,"->",quality
best = self.getVarValues()
bestQuality = quality
elif quality > bestQuality + 1e-2:
print("Got worse solution by local optimizing?",bestQuality,"->",quality)
                        self.setVarValues(best)
print("Resulting quality",bestQuality)
return best[0]
else:
#no IK problems, no global method set -- for now, just perform random restarts
#
#set seed = robot configuration
if self.q.value is None:
self.q.bind(robot.getConfig())
if reducedToFullMapping is None:
x0 = self.getVarVector()
else:
for var,vexpr in zip(reducedProblem.optimizationVariables,fullToReducedMapping):
var.bind(vexpr.eval(self.context))
x0 = reducedProblem.getVarVector()
#do global optimization of the cost function and return
print("Current optimization variable vector is",x0)
(succ,res) = params.solve(optProblem,x0)
if not succ:
print("Global optimize returned failure")
return None
if reducedToFullMapping is not None:
reducedProblem.setVarVector(res)
for var,vexpr in zip(self.optimizationVariables,reducedToFullMapping):
var.bind(vexpr.eval(reducedProblem.context))
else:
self.setVarVector(res)
#check feasibility if desired
if not self.inJointLimits(self.q.value):
print("Result from global optimize is out of joint limits")
return None
if not self.feasibilityTestsPass():
print("Result from global optimize isn't feasible")
return None
if not self.satisfiesEqualities(params.tol):
print("Result from global optimize doesn't satisfy tolerance: result %s"%(str(self.equalityResidual()),))
for obj in self.objectives:
if obj.type == 'eq':
print(" ",obj.expr,":",obj.expr.eval(self.context))
return None
#passed
print("Global optimize succeeded! Cost",self.cost())
q = self.q.value
return q
|
Through The Woods [G8722] - $9.98 : Yarn Tree, Your wholesale source for cross stitch supplies.
Cross stitch pattern from Barbara Ana Designs. Look who is coming "Through the Woods," & down Santa Claus lane?!?! Why it's Santa Claus himself! Riding on the back of one of his reindeer, with his bag full of presents, Santa is on a mission! Which house do you think he will arrive at next? Snow is beginning to fall, decorating the tops of the pine trees. Now dash away, dash away, dash away all! Stitch count 57 x 70. Stitched on #4138 Zweigart Belfast Linen 32ct. Raw Linen.
|
#!/usr/bin/env python2
#--Filename----------------------------------------------------------#
# bmu.py #
#--Info--------------------------------------------------------------#
# BlackArch mirrorlist update script #
# Updated: 27/05/2015 #
# The following lines: #
# [blackarch] #
# Include = /etc/pacman.d/blackarch-mirrorlist #
# Must be present in pacman.conf #
# blackarch.sh can be used to setup your pacman.conf correctly #
# Designed for Arch Linux - https://archlinux.org/download/ #
# and BlackArch - http://www.blackarch.org/index.html #
#--Author------------------------------------------------------------#
# JKAD - https://jkad.github.io #
#--Tested------------------------------------------------------------#
# 27/05/2015 - archlinux-2015.05.01-dual.iso #
#--Licence-----------------------------------------------------------#
# MIT Licence: #
# https://github.com/JKAD/Arch-Install-Scripts/blob/master/LICENSE #
#--------------------------------------------------------------------#
import os
import sys
import urllib2
import time
from HTMLParser import HTMLParser
from datetime import datetime
protocol = "://"
class ParseHTML(HTMLParser):
def __init__(self):
HTMLParser.__init__(self)
self.results = []
self.return_data = False
self.href = False
def handle_starttag(self, tag, attrs):
if tag == "img":
for name, value in attrs:
if "flags" in value:
self.return_data = True
        # "href" is an attribute of <a>, not a tag name; flag anchor tags
        # that carry an href so handle_data can pick up the link text
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.href = True
def handle_data(self, data):
if self.href:
if protocol in data:
self.results.append(data)
self.href = False
self.return_data = False
if self.return_data:
self.results.append(data)
def main():
    if os.geteuid() != 0:
sys.exit('bmu.py must be run as root')
url = "http://blackarch.org/downloads.html"
user_agent = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 \
(KHTML, like Gecko) Chrome/43.0.2357.81 Safari/537.36"
FILE = '/etc/pacman.d/blackarch-mirrorlist'
headers = {}
headers["User-Agent"] = user_agent
request = urllib2.Request(url, headers=headers)
response = urllib2.urlopen(request)
page = response.read()
parser = ParseHTML()
parser.feed(page)
parser_results = parser.results
ts = time.time()
timestamp = "# Updated: %s\n\n" \
% datetime.fromtimestamp(ts).strftime('%d-%m-%Y %H:%M:%S')
message = "# Uncomment the mirrors you wish to use\n"
default = "Server = http://mirror.team-cymru.org/blackarch/$repo/os/$arch\n"
with open(FILE, 'w') as overwrite:
overwrite.write(timestamp + message + default)
for i in parser_results:
if i.strip() != "":
if protocol in i:
hash_sym = "#Server = "
ending = "$repo/os/$arch\n"
else:
hash_sym = "\n#"
ending = "\n"
with open(FILE, 'a') as out_file:
out_file.write(hash_sym + i.strip('\n\r\t ') + ending)
print "Mirrorlist updated"
    print "Uncomment your preferred mirrors in %s and run pacman -Syy" % FILE
if __name__ == '__main__':
main()
|
The question of what fiduciary duties apply to various forms of business entities and what form such duties take has been addressed with essentially two opposite approaches: (1) to assume that managers of alternative entities such as LLCs have, by default, all of the traditional fiduciary duties of corporate directors, or (2) to apply no default duties other than those assigned in the entity’s governing documents. Moreover, different jurisdictions take different positions on the extent to which fiduciary duties of managers (if any) can be modified or eliminated by contract.
To the extent that, at law or in equity, a member or manager or other person has duties (including fiduciary duties) to a limited liability company or to another member or manager or to another person that is a party to or is otherwise bound by a limited liability company agreement, the member’s or manager’s or other person’s duties may be expanded or restricted or eliminated by provisions in the limited liability company agreement; provided, that the limited liability company agreement may not eliminate the implied contractual covenant of good faith and fair dealing.
I suggest that although current judicial analysis seems to imply that fiduciary duties engrained in the corporate law readily transfer to limited partnerships and limited liability companies as efficiently and effectively as they do to corporate governance issues, that conclusion is flawed. I conclude that parties to contractual entities such as limited liability partnerships and limited liability companies should be free—given a full, clear disclosure paradigm—to adopt or reject any fiduciary duty obligation by contract. Courts should recognize the parties' freedom of choice exercised by contract and should not superimpose an overlay of common law fiduciary duties, or the judicial scrutiny associated with them, where the parties have not contracted for those governance mechanisms in the documents forming their business entity.
A plain reading of Section 18–1101(c) of the LLC Act is consistent with Section 18–1104 and confirms that default fiduciary duties apply.
The introductory phrase “[t]o the extent that” in Section 18–1101(c) does not imply that the General Assembly was agnostic about the ontological question of whether fiduciary duties exist in limited liability companies. The same phrase appears in the parallel provision in the Delaware Limited Partnership Act (the “LP Act”), 6 Del. C. § 17–1101(d), and there has never been any serious doubt that the general partner of a Delaware limited partnership owes fiduciary duties. As the Chancellor explained in Auriga, the introductory phrase “makes clear that the statute does not itself impose some broader scope of fiduciary coverage than traditional principles of equity.” Put differently, the phrase “[t]o the extent that” embodies efficiency in drafting by the organs of the bar responsible for overseeing the alternative entity statutes and recommending changes to the General Assembly.
A number of states follow similar approaches to the question of default duties. The Massachusetts Limited Liability Company Act assumes that managers have default fiduciary duties. Such duties, however, may be amended and even eliminated by contract via the certificate of organization or written operating agreement.
(a) The fiduciary duties that a member owes to a member-managed limited liability company and the other members of the limited liability company are the duties of loyalty and care under subdivisions (b) and (c).
(1) To account to a limited liability company and hold as trustee for it any property, profit, or benefit derived by the member in the conduct and winding up of the activities of a limited liability company or derived from a use by the member of a limited liability company property, including the appropriation of a limited liability company opportunity.
(2) To refrain from dealing with a limited liability company in the conduct or winding up of the activities of a limited liability company as or on behalf of a party having an interest adverse to a limited liability company.
(3) To refrain from competing with a limited liability company in the conduct or winding up of the activities of the limited liability company.
(c) A member's duty of care to a limited liability company and the other members in the conduct and winding up of the activities of the limited liability company is limited to refraining from engaging in grossly negligent or reckless conduct, intentional misconduct, or a knowing violation of law.
(d) A member shall discharge the duties to a limited liability company and the other members under this title or under the operating agreement and exercise any rights consistent with the obligation of good faith and fair dealing.
(e) A member does not violate a duty or obligation under this article or under the operating agreement merely because the member's conduct furthers the member's own interest.
(1) Subdivisions (a), (b), (c), and (e) apply to the manager or managers and not the members.
(2) Subdivision (d) applies to the members and managers.
(3) Except as otherwise provided, a member does not have any fiduciary duty to the limited liability company or to any other member solely by reason of being a member.
1. The objective of the term is unreasonable; or
2. The term is an unreasonable means to achieve the provision's objective.
In deciding whether a term is manifestly unreasonable, a court must “make its determination as of the time the challenged term became part of the operating agreement and by considering only circumstances existing at that time.” A court may only invalidate a provision as manifestly unreasonable “if, in light of the purposes and activities of the limited liability company, it is readily apparent that: (a) the objective of the term is unreasonable; or (b) the term is an unreasonable means to achieve the provision's objective.” Given this restrictive standard of review, manifestly unreasonable challenges will likely be a narrow avenue of relief for litigants whose operating agreements bar fiduciary-duty claims.
At the opposite end of the spectrum, Texas courts have never held that fiduciary duties exist by default for managers of an LLC. Rather, whether fiduciary duties exist is a question of fact determined by interpretation of the agreement or agreements governing the company.
Unless otherwise specified, in this chapter the term “managers” includes the holders of analogous positions in entities other than LLCs, such as general partners in a limited partnership, as well as a member or other equity owner exercising managerial power.
6 Del. C. § 18-1101(c) (emphasis added). See also 6 Del. C. § 17-1101(c), which contains analogous language with respect to limited partnerships; Gotham Partners, L.P. v. Hallwood Realty Partners, L.P., 817 A.2d 160, 163-164 (Del. 2002) (implying that fiduciary duties of a general partner are established by contract rather than by default and that “a limited partnership agreement may provide for contractually created fiduciary duties substantially mirroring traditional fiduciary duties that apply in the corporation law”).
Myron T. Steele, “Judicial Scrutiny of Fiduciary Duties in Delaware Limited Partnerships and Limited Liability Companies,” 32 Del. J. Corp. L. 1 (2007); see also, e.g., Myron T. Steele, “Freedom of Contract and Default Contractual Duties in Delaware Limited Partnerships and Limited Liability Companies,” American Business Law Journal, 46: 221–242 (2009); Lewis H. Lazarus and Justin C. Jowers, “Fiduciary Duties of Managers of LLCs: The Status of the Debate in Delaware,” Business Law Today (February 2012).
Feeley v. NHAOCG, LLC, 62 A.3d 649, 661-62 (Del. Ch. 2012) (citing 6 Del. C. § 18–1104; Auriga Capital Corp. v. Gatz Properties, LLC, 40 A.3d 839 (Del. Ch. 2012), Gatz Properties, LLC v. Auriga Capital Corp., 59 A.3d 1206 (Del. 2012); Metro Ambulance, Inc. v. E. Med. Billing, Inc., 1995 WL 409015, at *2 (Del. Ch. July 5, 1995)); accord, Ross Holding and Management Co., et al. v. Advance Realty Group, LLC, et al., 2014 WL 4374261, at **12-15 (Del. Ch. Sept. 4, 2014). It is further worth noting that Vice Chancellor Laster based much of the relevant portions of the Feeley opinion on dicta contained in then-Chancellor Leo Strine’s decision in Auriga. Chancellor Strine succeeded Steele as Chief Justice of the Delaware Supreme Court on January 29, 2014.
Mass. Gen. Laws 156C § 8.
Knapp v. Neptune Tower Associates, et al., 892 N.E.2d 820, 824 (Mass. App. 2008).
E.g., N.Y. Limit. Liab. Co. § 409(a).
Cal. Corp. Code § 17704.09.
Cal. Corp. Code § 17701.10(e); In re Die Fliedermaus LLC, 323 B.R. 101, 110 (Bankr. S.D.N.Y. 2005).
Carella v. Scholet, 5 A.D.3d (N.Y.A.D. 2004) (dismissing claims for breach of fiduciary duty predicated on self-dealing conduct expressly authorized in a limited partnership agreement, and citing Pace v. Perk, 81 A.D.2d 444 (N.Y.A.D. 1981); Riviera Congress Assoc. v. Yassky, 223 N.E.2d 876 (N.Y.A.D. 1966); Lanier v. Bowdoin, 24 N.E.2d 732 (N.Y.A.D. 1939)).
Rev. Unif. Lim. Liab. Co. Act § 110 (2006).
Fla. Stat. § 605.0105(4)(c). The statute is silent regarding what “other fiduciary duty” may exist.
Fla. Stat. § 605.0105(5); see also Fla. Stat. § 620.1110 (2) (stating that a limited partnership agreement may not eliminate or “unreasonably” modify the duty of loyalty or care).
E.g., N.J. Stat. § 42:2C-11(h); 805 Ill. Comp. Stat. § 180/15-5.
Indeed, this language seems to fly in the face of the well-settled legal principle, articulated by courts throughout the Anglosphere, that the judiciary will not rescue a party from the adverse effects of an inadvisable, but otherwise enforceable, contractual provision, or seek to substitute the court’s notion of reasonableness for the parties’ express agreement. See, e.g., Eurofins Panlabs, Inc. v. Ricerca Biosciences, LLC, 2014 WL 2457515, at *1 (Del. Ch. May 30, 2014); Kostyszyn v. Martuscelli, 2015 WL 721291, at **4-5 (Del. Super. Feb. 18, 2015).
Andrea J. Sullivan and Steven B. Gladis, “New Remedies for LLC Members Oppression and Fiduciary Duties Under the Revised Uniform Limited Liability Company Act,” 287 N.J. Law. 72, 74 (April 2014) (emphasis added and internal citations omitted).
15 Pa. Stat. § 8943(b)(1) (subjecting managers of an LLC to Sections 1711 through 1717 of the corporate statute); see also, e.g., Jarl Investments, L.P. v. Fleck, 937 A.2d 1113, 1122-1123 (Pa. Super. 2007) (imposing default fiduciary duties on general partners of an LP).
15 Pa. Stat. § 8943(b)(1) (emphasis added).
Entertainment Merchandising Technology, L.L.C. v. Houchin, 720 F. Supp. 2d 792, 797 (N.D. Tex. 2010) (citing Gadin v. Societe Captrade, 2009 WL 1704049 (S.D. Tex. June 17, 2009); Suntech Processing Sys., L.L.C. v. Sun Comm., Inc., 2000 WL 1780236 (Tex. App. 2000); Kaspar v. Thorne, 755 S.W.2d 151,155 (Tex. App. 1988)). Note, however, that Texas appears to take a different approach, similar to the RULLCA standard, with respect to partnerships (including limited partnerships). Tex. Bus. Orgs. Code § 152.002.
|
from django.core import meta
from django.models import auth, core
class Comment(meta.Model):
user = meta.ForeignKey(auth.User, raw_id_admin=True)
content_type = meta.ForeignKey(core.ContentType)
object_id = meta.IntegerField('object ID')
headline = meta.CharField(maxlength=255, blank=True)
comment = meta.TextField(maxlength=3000)
rating1 = meta.PositiveSmallIntegerField('rating #1', blank=True, null=True)
rating2 = meta.PositiveSmallIntegerField('rating #2', blank=True, null=True)
rating3 = meta.PositiveSmallIntegerField('rating #3', blank=True, null=True)
rating4 = meta.PositiveSmallIntegerField('rating #4', blank=True, null=True)
rating5 = meta.PositiveSmallIntegerField('rating #5', blank=True, null=True)
rating6 = meta.PositiveSmallIntegerField('rating #6', blank=True, null=True)
rating7 = meta.PositiveSmallIntegerField('rating #7', blank=True, null=True)
rating8 = meta.PositiveSmallIntegerField('rating #8', blank=True, null=True)
# This field designates whether to use this row's ratings in aggregate
# functions (summaries). We need this because people are allowed to post
# multiple reviews on the same thing, but the system will only use the
# latest one (with valid_rating=True) in tallying the reviews.
valid_rating = meta.BooleanField('is valid rating')
submit_date = meta.DateTimeField('date/time submitted', auto_now_add=True)
is_public = meta.BooleanField()
ip_address = meta.IPAddressField('IP address', blank=True, null=True)
is_removed = meta.BooleanField(help_text='Check this box if the comment is inappropriate. A "This comment has been removed" message will be displayed instead.')
site = meta.ForeignKey(core.Site)
class META:
db_table = 'comments'
module_constants = {
# min. and max. allowed dimensions for photo resizing (in pixels)
'MIN_PHOTO_DIMENSION': 5,
'MAX_PHOTO_DIMENSION': 1000,
# option codes for comment-form hidden fields
'PHOTOS_REQUIRED': 'pr',
'PHOTOS_OPTIONAL': 'pa',
'RATINGS_REQUIRED': 'rr',
'RATINGS_OPTIONAL': 'ra',
'IS_PUBLIC': 'ip',
}
ordering = ('-submit_date',)
admin = meta.Admin(
fields = (
(None, {'fields': ('content_type', 'object_id', 'site')}),
('Content', {'fields': ('user', 'headline', 'comment')}),
('Ratings', {'fields': ('rating1', 'rating2', 'rating3', 'rating4', 'rating5', 'rating6', 'rating7', 'rating8', 'valid_rating')}),
('Meta', {'fields': ('is_public', 'is_removed', 'ip_address')}),
),
list_display = ('user', 'submit_date', 'content_type', 'get_content_object'),
list_filter = ('submit_date',),
date_hierarchy = 'submit_date',
search_fields = ('comment', 'user__username'),
)
def __repr__(self):
return "%s: %s..." % (self.get_user().username, self.comment[:100])
def get_absolute_url(self):
return self.get_content_object().get_absolute_url() + "#c" + str(self.id)
def get_crossdomain_url(self):
return "/r/%d/%d/" % (self.content_type_id, self.object_id)
def get_flag_url(self):
return "/comments/flag/%s/" % self.id
def get_deletion_url(self):
return "/comments/delete/%s/" % self.id
def get_content_object(self):
"""
Returns the object that this comment is a comment on. Returns None if
the object no longer exists.
"""
from django.core.exceptions import ObjectDoesNotExist
try:
return self.get_content_type().get_object_for_this_type(pk=self.object_id)
except ObjectDoesNotExist:
return None
get_content_object.short_description = 'Content object'
def _fill_karma_cache(self):
"Helper function that populates good/bad karma caches"
good, bad = 0, 0
for k in self.get_karmascore_list():
            if k.score == -1:
                bad += 1
            elif k.score == 1:
                good += 1
self._karma_total_good, self._karma_total_bad = good, bad
def get_good_karma_total(self):
if not hasattr(self, "_karma_total_good"):
self._fill_karma_cache()
return self._karma_total_good
def get_bad_karma_total(self):
if not hasattr(self, "_karma_total_bad"):
self._fill_karma_cache()
return self._karma_total_bad
def get_karma_total(self):
if not hasattr(self, "_karma_total_good") or not hasattr(self, "_karma_total_bad"):
self._fill_karma_cache()
return self._karma_total_good + self._karma_total_bad
def get_as_text(self):
return 'Posted by %s at %s\n\n%s\n\nhttp://%s%s' % \
(self.get_user().username, self.submit_date,
self.comment, self.get_site().domain, self.get_absolute_url())
def _module_get_security_hash(options, photo_options, rating_options, target):
"""
Returns the MD5 hash of the given options (a comma-separated string such as
'pa,ra') and target (something like 'lcom.eventtimes:5157'). Used to
validate that submitted form options have not been tampered-with.
"""
from django.conf.settings import SECRET_KEY
import md5
return md5.new(options + photo_options + rating_options + target + SECRET_KEY).hexdigest()
def _module_get_rating_options(rating_string):
"""
Given a rating_string, this returns a tuple of (rating_range, options).
>>> s = "scale:1-10|First_category|Second_category"
>>> get_rating_options(s)
([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], ['First category', 'Second category'])
"""
rating_range, options = rating_string.split('|', 1)
rating_range = range(int(rating_range[6:].split('-')[0]), int(rating_range[6:].split('-')[1])+1)
choices = [c.replace('_', ' ') for c in options.split('|')]
return rating_range, choices
def _module_get_list_with_karma(**kwargs):
"""
Returns a list of Comment objects matching the given lookup terms, with
_karma_total_good and _karma_total_bad filled.
"""
kwargs.setdefault('select', {})
kwargs['select']['_karma_total_good'] = 'SELECT COUNT(*) FROM comments_karma WHERE comments_karma.comment_id=comments.id AND score=1'
kwargs['select']['_karma_total_bad'] = 'SELECT COUNT(*) FROM comments_karma WHERE comments_karma.comment_id=comments.id AND score=-1'
return get_list(**kwargs)
def _module_user_is_moderator(user):
from django.conf.settings import COMMENTS_MODERATORS_GROUP
if user.is_superuser:
return True
for g in user.get_group_list():
if g.id == COMMENTS_MODERATORS_GROUP:
return True
return False
class FreeComment(meta.Model):
# A FreeComment is a comment by a non-registered user.
content_type = meta.ForeignKey(core.ContentType)
object_id = meta.IntegerField('object ID')
comment = meta.TextField(maxlength=3000)
person_name = meta.CharField("person's name", maxlength=50)
submit_date = meta.DateTimeField('date/time submitted', auto_now_add=True)
is_public = meta.BooleanField()
ip_address = meta.IPAddressField()
# TODO: Change this to is_removed, like Comment
approved = meta.BooleanField('approved by staff')
site = meta.ForeignKey(core.Site)
class META:
db_table = 'comments_free'
ordering = ('-submit_date',)
admin = meta.Admin(
fields = (
(None, {'fields': ('content_type', 'object_id', 'site')}),
('Content', {'fields': ('person_name', 'comment')}),
('Meta', {'fields': ('submit_date', 'is_public', 'ip_address', 'approved')}),
),
list_display = ('person_name', 'submit_date', 'content_type', 'get_content_object'),
list_filter = ('submit_date',),
date_hierarchy = 'submit_date',
search_fields = ('comment', 'person_name'),
)
def __repr__(self):
return "%s: %s..." % (self.person_name, self.comment[:100])
def get_absolute_url(self):
return self.get_content_object().get_absolute_url() + "#c" + str(self.id)
def get_content_object(self):
"""
Returns the object that this comment is a comment on. Returns None if
the object no longer exists.
"""
from django.core.exceptions import ObjectDoesNotExist
try:
return self.get_content_type().get_object_for_this_type(pk=self.object_id)
except ObjectDoesNotExist:
return None
get_content_object.short_description = 'Content object'
class KarmaScore(meta.Model):
user = meta.ForeignKey(auth.User)
comment = meta.ForeignKey(Comment)
score = meta.SmallIntegerField(db_index=True)
scored_date = meta.DateTimeField(auto_now=True)
class META:
module_name = 'karma'
unique_together = (('user', 'comment'),)
module_constants = {
# what users get if they don't have any karma
'DEFAULT_KARMA': 5,
'KARMA_NEEDED_BEFORE_DISPLAYED': 3,
}
def __repr__(self):
return "%d rating by %s" % (self.score, self.get_user())
def _module_vote(user_id, comment_id, score):
try:
karma = get_object(comment__id__exact=comment_id, user__id__exact=user_id)
except KarmaScoreDoesNotExist:
karma = KarmaScore(None, user_id, comment_id, score, datetime.datetime.now())
karma.save()
else:
karma.score = score
karma.scored_date = datetime.datetime.now()
karma.save()
def _module_get_pretty_score(score):
"""
Given a score between -1 and 1 (inclusive), returns the same score on a
scale between 1 and 10 (inclusive), as an integer.
"""
if score is None:
return DEFAULT_KARMA
return int(round((4.5 * score) + 5.5))
class UserFlag(meta.Model):
user = meta.ForeignKey(auth.User)
comment = meta.ForeignKey(Comment)
flag_date = meta.DateTimeField(auto_now_add=True)
class META:
db_table = 'comments_user_flags'
unique_together = (('user', 'comment'),)
def __repr__(self):
return "Flag by %r" % self.get_user()
def _module_flag(comment, user):
"""
Flags the given comment by the given user. If the comment has already
been flagged by the user, or it was a comment posted by the user,
nothing happens.
"""
if int(comment.user_id) == int(user.id):
return # A user can't flag his own comment. Fail silently.
try:
f = get_object(user__id__exact=user.id, comment__id__exact=comment.id)
except UserFlagDoesNotExist:
from django.core.mail import mail_managers
f = UserFlag(None, user.id, comment.id, None)
message = 'This comment was flagged by %s:\n\n%s' % (user.username, comment.get_as_text())
mail_managers('Comment flagged', message, fail_silently=True)
f.save()
class ModeratorDeletion(meta.Model):
user = meta.ForeignKey(auth.User, verbose_name='moderator')
comment = meta.ForeignKey(Comment)
deletion_date = meta.DateTimeField(auto_now_add=True)
class META:
db_table = 'comments_moderator_deletions'
unique_together = (('user', 'comment'),)
def __repr__(self):
return "Moderator deletion by %r" % self.get_user()
|
Security is at the heart of three proposals at the popular vote to be held on 25 September – national security, social security and securing the natural environment.
Albeit in three different areas, the three proposals deal with fundamental security issues. How far should or must the state go to identify and avert threats to national security? What funds should be used to ensure a good old-age pension? And how should we manage our economy in future to safeguard the natural environment, protect resources and reduce environmental pollution? Behind such questions lie the new Intelligence Service Act, the “AHV plus” popular initiative and the “Green economy” popular initiative, which will be put to the vote on 25 September.
The Federal Intelligence Service (FIS) is also to be permitted to infiltrate computers, listen in to telephone calls and bug private property in future, according to the new Intelligence Service Act. This governs the duties but also the limits on and control of the FIS. It provides for new measures for obtaining information – such as monitoring the postal and telecommunications systems – in the fields of terrorism, illegal intelligence activities and attacks on critical infrastructure. The FIS is subject to four-fold control by the bodies of Parliament, the administration and the Federal Council. “The fundamental rights and individual freedom of Swiss citizens are protected by the new law and the sphere of privacy remains untouched as far as possible,” maintains the Swiss government. The law also ensures a “strengthening of internal and external security appropriate to the threat situation”.
The majority of MPs share this view. Some left-wing politicians nevertheless voiced criticism of the proposal during consultations. Paul Rechsteiner, the SP Council of States member from St. Gallen, declared that Switzerland is facing a fundamental decision about whether to provide the FIS with all means of surveillance. An “alliance against the snooping state” – consisting primarily of small, left-wing parties and youth parties – called the referendum against the Intelligence Service Act. Opponents point to the end of privacy: “Everyone is under surveillance, not just criminals as is often claimed. Mass surveillance can be carried out through the tapping of telephone calls, reading of emails, Facebook, WhatsApp and SMS messages as well as the monitoring of the internet through keyword searches regardless of whether there is cause for suspicion,” they contend. The Office of the Attorney General of Switzerland and the cantonal police authorities are already responsible for investigating terrorist activities and organised crime, and that is sufficient, they say.
The Social Democrats officially support the referendum and therefore oppose the law. It is noteworthy that resistance is also emerging in some conservative circles and in the business community. Above all, criticism has been voiced by the IT and telecommunications sectors.
10 % more old-age and survivors’ insurance (AHV)?
The popular initiative “AHV plus”, launched by the Swiss Federation of Trade Unions, calls for a 10 % increase in AHV pensions. Each single person would receive CHF 200 more a month and each married couple CHF 350 more. Those behind the initiative are seeking to give state old-age pension and survivors’ insurance (AHV) more weight in relation to the pension funds. They argue that the pension fund benefits will continue to decrease in future. Reductions of up to 20 % are not uncommon owing to the financial market crisis. “Pension losses need to be rebalanced. The most effective and economical way of achieving this is by increasing AHV pensions by 10 %. This issue is even more pressing because AHV pensions have not risen significantly for decades and increasingly lag behind wage trends the more time goes on,” write the initiative’s authors on their homepage.
An increase of 10 % in pensions would see AHV expenditure climb by four billion Swiss francs a year. The initiative does not reveal how the pension increase would be funded. SP National Councillor Silvia Schenker does not see money as an issue. The pension increase “would cost the employer and employee 0.4 % of salary each”, she says. That is feasible because salary contributions have not risen for 40 years. Conservative politicians take a different view. Urs Schwaller, a former CVP Council of States member from Fribourg, declared that the pension increase called for is “simply not financially viable”. The funding of old-age pensions is a major challenge even without this initiative, he says.
The Federal Council does not believe there is any financial leeway for increasing AHV benefits. It stands by its “old-age pension 2020” reform project, which is currently undergoing parliamentary consultation. This comprehensive package contains the following points: the same pension age of 65 for men and women, flexible structuring of pensions, a reduction in the minimum conversion rate in occupational pensions and additional funding of the AHV by increasing VAT.
The Greens are raising a topic that is central to them with their “for a green economy” initiative. The popular initiative seeks to reduce Switzerland’s environmental footprint to a sustainable level of one planet by 2050. If the whole world behaved like Switzerland, three planets would be needed. The authors believe switching to a green economy would tackle environmental issues, such as climate change, rainforest clearance and overfishing, and ensure the sustainable use of natural resources. “The throw-away economy has to become a circular one, focussing on long-life products and the recycling of waste as raw materials,” they say.
The initiative stood no chance in Parliament where it was not deemed business-friendly enough. The Bernese FDP National Councillor Christian Wasserfallen believes the Swiss economy is already green enough. He warns against “senseless and excessive regulation”. The Federal Council also rejects the initiative but put forward an indirect counter-proposal because it at least supports the general thrust. It tabled an amendment to the Environmental Protection Act with the aim of protecting resources and using them more efficiently. Federal Councillor Doris Leuthard used similar wording to the Greens during the parliamentary debate: “We must move from a throw-away society towards a circular economy.” Switzerland produces the greatest amount of refuse per capita in Europe, she said. However, the Federal Councillor’s warning went unheeded. Even an amendment to the Environmental Protection Act of 1983 was a step too far for Parliament. The Swiss people will now decide solely on the Greens’ initiative on 25 September without a counter-proposal.
|
# json-map, a tiled JSON map renderer for pyglet
# Copyright (C) 2014 Juan J. Martinez <jjm@usebox.net>
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in
# all copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
# THE SOFTWARE.
"""
A Tiled JSON map renderer for pyglet.
These classes use the JSON format as generated by Tiled JSON plugin.
`pyglet.resource` framework is used to load all the elements of the map, so
any path information must be removed from the tileset.
"""
import os
import json
import pyglet
from pyglet.graphics import OrderedGroup
from pyglet.sprite import Sprite
from pyglet import gl
__all__ = ['Map', "TileLayer", "ObjectGroup",]
def get_texture_sequence(filename, tilewidth=32, tileheight=32, margin=1, spacing=1, nearest=False):
"""Returns a texture sequence of a grid generated from a tile set."""
image = pyglet.resource.image(filename)
region = image.get_region(margin, margin, image.width-margin*2, image.height-margin*2)
grid = pyglet.image.ImageGrid(region,
int(region.height/tileheight),
int(region.width/tilewidth),
row_padding=spacing,
column_padding=spacing,
)
texture = grid.get_texture_sequence()
if nearest:
gl.glTexParameteri(texture.target, gl.GL_TEXTURE_MIN_FILTER, gl.GL_NEAREST)
gl.glTexParameteri(texture.target, gl.GL_TEXTURE_MAG_FILTER, gl.GL_NEAREST)
return texture
class BaseLayer(object):
"""
Base layer.
Takes care of the "visible" flag.
"""
# ordered group
groups = 0
def __init__(self, data, map):
self.data = data
self.map = map
if self.data["visible"]:
self.sprites = {}
self.group = OrderedGroup(BaseLayer.groups)
BaseLayer.groups += 1
class TileLayer(BaseLayer):
"""
Tile layer.
Provides a pythonic interface to the tile layer, including:
- Iterate through the tiles.
- Check if one coordinate exists in the layer.
- Get one tile of layer.
"""
def __iter__(self):
return iter(self.data)
def __contains__(self, index):
if type(index) != tuple:
raise TypeError("tuple expected")
x, y = index
        return 0 <= int(x+y*self.map.data["width"]) < len(self.data["data"])
def __getitem__(self, index):
if type(index) != tuple:
raise TypeError("tuple expected")
x, y = index
return self.data["data"][int(x+y*self.map.data["width"])]
def set_viewport(self, x, y, w, h):
tw = self.map.data["tilewidth"]
th = self.map.data["tileheight"]
def yrange(f, t, s):
while f < t:
yield f
f += s
in_use = []
for j in yrange(y, y+h+th, th):
py = j//th
for i in yrange(x, x+w+tw, tw):
px = i//tw
in_use.append((px, py))
if (px, py) not in self.sprites:
try:
texture = self.map.get_texture(self[px, py])
except (KeyError, IndexError):
self.sprites[(px, py)] = None
else:
self.sprites[(px, py)] = Sprite(texture,
x=(px*tw),
y=h-(py*th)-th,
batch=self.map.batch,
group=self.group,
usage="static",
)
for key in list(self.sprites.keys()):
if key not in in_use:
if self.sprites[key] is not None:
self.sprites[key].delete()
del self.sprites[key]
class ObjectGroup(BaseLayer):
"""
Object Group Layer.
Only tile based objects will be drawn (not shape based).
Provides a pythonic interface to the object layer, including:
- Iterate through the objects.
- Check if one coordinate or an object name exists in the layer.
- Get one object based on its coordinates or its name.
Also it is possible to get a list of objects of the same type with
`ObjectGroup.get_by_type(type)`.
"""
def __init__(self, data, map):
super(ObjectGroup, self).__init__(data, map)
self.h = 0
self.objects = []
self._index = {}
self._index_type = {}
self._xy_index = {}
for obj in data["objects"]:
self.objects.append(obj)
name = obj.get("name", "?")
if name not in self._index:
self._index[name] = []
otype = obj.get("type", "?")
if otype not in self._index_type:
self._index_type[otype] = []
x = int(obj["x"])//self.map.data["tilewidth"]
y = int(obj["y"])//self.map.data["tileheight"]-1
if (x, y) not in self._xy_index:
self._xy_index[x, y] = []
self._index[name].append(self.objects[-1])
self._index_type[otype].append(self.objects[-1])
self._xy_index[x, y].append(self.objects[-1])
# XXX: is this useful AT ALL?
self.objects.sort(key=lambda obj: obj["x"]+obj["y"]*self.map.data["width"])
def __iter__(self):
return iter(self.objects)
def __contains__(self, name):
if isinstance(name, tuple):
x, y = name
return (int(x), int(y)) in self._xy_index
return name in self._index
def __getitem__(self, name):
if isinstance(name, tuple):
x, y = name
        # XXX: if there are several objects, return the first one
return self._xy_index[int(x), int(y)][0]
return self._index[name]
def get_by_type(self, otype):
return self._index_type[otype]
def set_viewport(self, x, y, w, h):
self.h = h
tw = self.map.data["tilewidth"]
th = self.map.data["tileheight"]
in_use = []
for obj in self.objects:
if x-tw < obj["x"] < x+w+tw and y-th < obj["y"] < y+h+th:
if not obj["visible"]:
continue
if "gid" in obj:
in_use.append((obj["x"], obj["y"]))
try:
texture = self.map.get_texture(obj["gid"])
tileoffset = self.map.get_tileoffset(obj["gid"])
except (IndexError, KeyError):
sprite = None
else:
sprite = Sprite(texture,
x=obj["x"]+tileoffset[0],
y=self.h-obj["y"]+tileoffset[1],
batch=self.map.batch,
group=self.group,
usage="static",
)
self.sprites[(obj["x"], obj["y"])] = sprite
        for key in list(self.sprites.keys()):
            if key not in in_use:
                # the stored sprite can be None when the texture lookup failed
                if self.sprites[key] is not None:
                    self.sprites[key].delete()
                del self.sprites[key]
class Tileset(object):
"""Manages a tileset and it's used internally by TileLayer."""
def __init__(self, data, nearest=False):
self.data = data
        # used to convert coordinates of the grid; in Tiled, margin is the
        # border around the tile set and spacing is the gap between tiles
        self.columns = (self.data["imagewidth"]-self.data["margin"]*2+self.data["spacing"])//(self.data["tilewidth"]+self.data["spacing"])
        self.rows = (self.data["imageheight"]-self.data["margin"]*2+self.data["spacing"])//(self.data["tileheight"]+self.data["spacing"])
# the image will be accessed using pyglet resources
self.image = os.path.basename(self.data["image"])
self.texture = get_texture_sequence(self.image, self.data["tilewidth"],
self.data["tileheight"],
self.data["margin"],
self.data["spacing"],
                                             nearest=nearest,
)
def __getitem__(self, index):
return self.texture[index]
def __len__(self):
return len(self.texture)
class Map(object):
"""
Load, manage and render Tiled JSON files.
    Maps can be created by providing the JSON data to this class or by using
    `Map.load_json()`, and after that a viewport must be set with `Map.set_viewport()`.
"""
def __init__(self, data, nearest=False):
self.data = data
self.tilesets = {} # the order is not important
self.layers = []
self.tilelayers = {}
self.objectgroups = {}
for tileset in data["tilesets"]:
self.tilesets[tileset["name"]] = Tileset(tileset, nearest)
for layer in data["layers"]:
# TODO: test this!
            if layer["name"] in self.tilelayers or layer["name"] in self.objectgroups:
raise ValueError("Duplicated layer name %s" % layer["name"])
if layer["type"] == "tilelayer":
self.layers.append(TileLayer(layer, self))
self.tilelayers[layer["name"]] = self.layers[-1]
elif layer["type"] == "objectgroup":
self.layers.append(ObjectGroup(layer, self))
self.objectgroups[layer["name"]] = self.layers[-1]
else:
                raise ValueError("unsupported layer type %s" % layer["type"])
self.batch = pyglet.graphics.Batch()
# viewport
self.x = 0
self.y = 0
self.w = 0
self.h = 0
# focus
self.fx = None
self.fy = None
# useful (size in pixels)
self.p_width = self.data["width"]*self.data["tilewidth"]
self.p_height = self.data["height"]*self.data["tileheight"]
# build a texture index converting pyglet indexing of the texture grid
# to tiled coordinate system
self.tileoffset_index = {}
self.texture_index = {}
for tileset in self.tilesets.values():
for y in range(tileset.rows):
for x in range(tileset.columns):
self.texture_index[x+y*tileset.columns+tileset.data["firstgid"]] = \
tileset[(tileset.rows-1-y),x]
# TODO: test this!
if "tileoffset" in tileset.data:
self.tileoffset_index[x+y*tileset.columns+tileset.data["firstgid"]] = \
(tileset.data["tileoffset"]["x"], tileset.data["tileoffset"]["y"])
def invalidate(self):
"""Forces a batch update of the map."""
self.set_viewport(self.x, self.y, self.w, self.h, True)
def set_viewport(self, x, y, w, h, force=False):
"""
Sets the map viewport to the screen coordinates.
Optionally the force flag can be used to update the batch even if the
viewport didn't change (this should be used via `Map.invalidate()`).
"""
# x and y can be floats
vx = max(x, 0)
vy = max(y, 0)
vx = min(vx, (self.p_width)-w)
vy = min(vy, (self.p_height)-h)
vw = int(w)
vh = int(h)
if not any([force, vx!=self.x, vy!=self.y, vw!=self.w, vh!=self.h]):
return
self.x = vx
self.y = vy
self.w = vw
self.h = vh
for layer in self.layers:
if layer.data["visible"]:
layer.set_viewport(self.x, self.y, self.w, self.h)
def set_focus(self, x, y):
"""Sets the focus in (x, y) world coordinates."""
x = int(x)
y = int(y)
if self.fx == x and self.fy == y:
return
self.fx = x
self.fy = y
vx = max(x-(self.w//2), 0)
vy = max(y-(self.h//2), 0)
if vx+(self.w//2) > self.p_width:
vx = self.p_width-self.w
if vy+(self.h//2) > self.p_height:
vy = self.p_height-self.h
self.set_viewport(vx, vy, self.w, self.h)
def world_to_screen(self, x, y):
"""
Translate world coordinate into screen coordinates.
Returns a (x, y) tuple.
"""
return x-self.x, self.h-(y-self.y)
def get_texture(self, gid):
"""
Returns a texture identified by its gid.
If not found will raise a KeyError or IndexError.
"""
return self.texture_index[gid]
def get_tileoffset(self, gid):
"""Returns the offset of a tile."""
return self.tileoffset_index.get(gid, (0, 0))
@property
def last_group(self):
"""
        The last used group in `Map` batch.
This is useful in case any Sprite is added to the `Map` to
be drawn by the Map's batch without being managed by the Map.
Using this value plus one will ensure the sprite will be drawn
over the map.
"""
return BaseLayer.groups-1
@staticmethod
def load_json(fileobj, nearest=False):
"""
Load the map in JSON format.
        This class method returns a `Map` object, and the file will be
        closed after it is read.
Set nearest to True to set GL_NEAREST for both min and mag
filters in the tile textures.
"""
data = json.load(fileobj)
fileobj.close()
return Map(data, nearest)
def draw(self):
"""Applies transforms and draws the batch."""
gl.glPushMatrix()
gl.glTranslatef(-self.x, self.y, 0)
self.batch.draw()
gl.glPopMatrix()
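`TileLayer.set_viewport` above walks the visible region in tile-sized steps to decide which sprites to keep alive. A standalone sketch of just that index math (a hypothetical helper for illustration, not part of the json-map API):

```python
def visible_tiles(x, y, w, h, tw, th):
    """Return the (column, row) pairs a pixel-space viewport touches,
    including one extra row/column of overscan, as set_viewport does."""
    tiles = []
    j = y
    while j < y + h + th:  # same loop shape as the yrange helper
        i = x
        while i < x + w + tw:
            tiles.append((i // tw, j // th))
            i += tw
        j += th
    return tiles

# A 64x64 viewport at the origin over 32px tiles covers a 3x3 block
# (two visible rows/columns plus one row/column of overscan).
assert len(visible_tiles(0, 0, 64, 64, 32, 32)) == 9
```

Only tiles entering this set get a `Sprite`; the rest are deleted, which keeps the batch small on large maps.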
|
New LP out on the lovely Helen Scarsdale label.
Our man Millis is a Climax Golden Twin and a noted curator of globe trotting / time traveling esoterica, amongst other accolades. In the former category, Millis and Jeffery Taylor steadily release some of the most headscratching amalgamations of avant-rock, decontextualized temple music, heightened-state minimalism, and collaged field recordings this side of the Sun City Girls (including the soundtrack to the cult film Session Nine); and in the latter, Millis has published a number of acclaimed anthologies for Sublime Frequencies (Scattered Melodies, This World is Unreal Like a Snake in a Rope, Phi Ta Khon, The Crying Princess, etc.) and Dust-To-Digital (our personal favorite, aptly titled Victrola Favorites). With his fingers in so many jars of jam, it can seem like an uncommon occurrence for Millis to release solo work although he is one to smear his sticky hands all over himself in performance, installation, and collaboration. Thus, The Helen Scarsdale Agency is delighted in presenting his latest opus, Relief.
A fever dream of blurred harmonics and ethnomusicological spelunking, Relief repeatedly returns to variations on a peculiar yet beautifully serpentine drone, whose twinkling acoustic properties meld the hallucinatory mouth-music of the Bangladeshi Murung people and the curved air hypnosis of Terry Riley. Millis bookends and interrupts his mysterious miasma with comedic interludes snatched from his lauded collection of antique 78s, maudlin piano tone-clusters, and teleported crescendos of spectral ballroom waltzes. More Nurse With Wound than The Caretaker, this polyglot raga-drone of daytime somnambulism and psychedelic slipperiness speaks to the uneasy borders at psychological, cultural, and geophysical states of being. Oh, to be a human on this planet.
~ by Robert Millis on 2013.
|
from __future__ import division
import math
def cumdist(samples):
"""
    Computes the cumulative distance along a sequence of vectors.
samples = [ [v11,...,v1N], ... [vn1,...,vnN] ]
"""
N, l, Ln = len(samples), [0.0], 0.0
for i in range(1,N):
Li = math.sqrt( sqL2(samples[i], samples[i-1]) )
Ln += Li
l.append(Ln)
return l, Ln
def sqL2(a, b):
"""
    Computes the squared L2 (Euclidean) distance between two vectors.
a = [a1,...,aN]; b = [b1,...,bN]
"""
dim, nrg = len(a), 0.0
for d in range(dim):
dist = a[d] - b[d]
nrg += dist * dist
return nrg
def clustercenter(samples):
"""
Computes the geometric center of a set of vectors.
samples = [ [v11,...,v1N], ... [vn1,...,vnN] ]
"""
N, dim = len(samples), len(samples[0])
if N == 1: # singleton cluster
return samples[0]
# Cluster center is the average in all dimensions
dsum = [0.0] * dim
for d in range(dim):
for i in range(N):
dsum[d] += samples[i][d]
dsum[d] /= N
return dsum
def whiten(samples):
"""
Divides each feature by its standard deviation across all observations,
in order to give it unit variance.
@param samples array [ [v11,...,v1N], ... [vn1,...,vnN] ]
"""
N, dim = len(samples), len(samples[0])
    pts = [list(row) for row in samples]  # copy rows so the input is not mutated
    for d in range(dim):
        cols = [samples[i][d] for i in range(N)]
        _, s = msd(cols)
        if s > 0:
            for i in range(N):
                pts[i][d] = samples[i][d] / s
return pts
def avg(vec):
"""
Computes the average of all vector values.
vec = [v1,...,vN]
"""
return sum(vec) / len(vec)
def msd(vec):
"""
Computes the mean plus standard deviation of a vector.
vec = [v1,...,vN]
"""
mean, sd, n = avg(vec), 0.0, len(vec)
if n > 1:
for v in vec:
sd += (v - mean)**2
sd = math.sqrt(sd / (n-1))
return mean, sd
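As a quick sanity check of the distance bookkeeping in `cumdist`/`sqL2` above, the cumulative length of a 3-4-5 walk can be computed by hand (restated inline so the snippet runs on its own):

```python
import math

# Walk out along a 3-4-5 triangle hypotenuse, pause, and walk back.
pts = [[0.0, 0.0], [3.0, 4.0], [3.0, 4.0], [0.0, 0.0]]
lengths, total = [0.0], 0.0
for i in range(1, len(pts)):
    # same math as sqL2 followed by the sqrt taken in cumdist
    step = math.sqrt(sum((a - b) ** 2 for a, b in zip(pts[i], pts[i - 1])))
    total += step
    lengths.append(total)
# lengths == [0.0, 5.0, 5.0, 10.0], total == 10.0
```

The zero-length middle step shows why `cumdist` records one entry per sample, repeating the running total when consecutive samples coincide.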
|
Brian Deveaux was recently elected to the board of trustees of the Susan L. Curtis Charitable Foundation. The mission of the Susan L. Curtis Charitable Foundation and Camp Susan Curtis is to ensure that economically disadvantaged Maine youth develop the individual character, self-confidence, and skills essential to becoming independent, contributing citizens. The organization's efforts are focused on lifting Maine youth out of the cycle of generational poverty through transformative learning — using experiential programming to transform a disadvantaged child’s perspective on life and learning, thereby redefining his or her future. The supporters who were instrumental in the creation of the Foundation have always believed every child matters. Their vision became Camp Susan Curtis, a safe haven where young spirits are nourished, young minds developed, and where people believe fully and unconditionally that the future of every Maine child has value and promise.
|
import os
from pyvcloud.vcloudair import VCA
def print_vca(vca):
if vca:
print 'vca token: ', vca.token
if vca.vcloud_session:
print 'vcloud session token: ', vca.vcloud_session.token
print 'org name: ', vca.vcloud_session.org
print 'org url: ', vca.vcloud_session.org_url
print 'organization: ', vca.vcloud_session.organization
else:
print 'vca vcloud session: ', vca.vcloud_session
else:
print 'vca: ', vca
### On Demand
host='iam.vchs.vmware.com'
username = os.environ['VCAUSER']
password = os.environ['PASSWORD']
instance = 'c40ba6b4-c158-49fb-b164-5c66f90344fa'
org = 'a6545fcb-d68a-489f-afff-2ea055104cc1'
vdc = 'VDC1'
vapp = 'ubu'
network = 'default-routed-network'
vca = VCA(host=host, username=username, service_type='ondemand', version='5.7', verify=True)
assert vca
result = vca.login(password=password)
assert result
result = vca.login_to_instance(password=password, instance=instance, token=None, org_url=None)
assert result
result = vca.login_to_instance(instance=instance, password=None, token=vca.vcloud_session.token, org_url=vca.vcloud_session.org_url)
assert result
print_vca(vca)
the_vdc = vca.get_vdc(vdc)
assert the_vdc
print the_vdc.get_name()
the_vapp = vca.get_vapp(the_vdc, vapp)
assert the_vapp
print the_vapp.me.name
the_network = vca.get_network(vdc, network)
assert the_network
# this assumes that the vApp is already connected to the network so it should return immediately with success
task = the_vapp.connect_to_network(network, the_network.get_href(), 'bridged')
print task.get_status()
assert 'success' == task.get_status()
|
Stone Deaf Fig Fumb Parametric Fuzz & Noise Gate
Built on the classic Big-Muff circuit, but with the addition of an often-needed noise gate, the Stone Deaf Fig Fumb lets you get completely off-the-wall fuzz sounds that remain manageable.
Additionally the Stone Deaf Fig Fumb has an expression pedal input, allowing you to control a plethora of variables resulting in a foot controllable fuzz wah or fuzz phaser.
The expression pedal input only works with the Stone Deaf EP-1 expression pedal.
|
#!/usr/bin/env python
#
# This file is part of the LibreOffice project.
#
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
#
from __future__ import print_function
from xml.dom import minidom
import sys
def prefixForGrammar(namespace):
ns = namespace.getElementsByTagName("grammar")[0].getAttribute("ns")
return ooxUrlAliases[ns]
def parseNamespaceAliases(node):
ret = {}
for k, v in list(node.attributes.items()):
if k.startswith("xmlns:"):
ret[k.replace('xmlns:', '')] = v
return ret
def parseNamespaces(fro):
sock = open(fro)
for i in sock.readlines():
line = i.strip()
alias, url = line.split(' ')[1:]
ooxUrlAliases[url] = alias
sock.close()
def check(model):
defines = [i.getAttribute("name") for i in model.getElementsByTagName("define")]
for reference in [i.getAttribute("name") for i in model.getElementsByTagName("ref")]:
if reference not in defines:
raise Exception("Unknown define element with name '%s'" % reference)
for start in [i.getAttribute("name") for i in model.getElementsByTagName("start")]:
if start not in defines:
raise Exception("Unknown start element with name '%s'" % start)
def preprocess(model):
modelNode = [i for i in model.childNodes if i.localName == "model"][0]
# Alias -> URL, based on "xmlns:" attributes.
modelNamespaceAliases = parseNamespaceAliases(modelNode)
for i in modelNode.getElementsByTagName("namespace"):
grammarprefix = prefixForGrammar(i)
grammar = i.getElementsByTagName("grammar")[0]
for j in i.getElementsByTagName("element") + i.getElementsByTagName("attribute"):
# prefix
prefix = ""
if ":" in j.getAttribute("name"):
nameprefix = j.getAttribute("name").split(':')[0]
prefix = ooxUrlAliases[modelNamespaceAliases[nameprefix]]
elif j.localName == "attribute":
if grammar.getAttribute("attributeFormDefault") == "qualified":
prefix = grammarprefix
else:
prefix = grammarprefix
# localname
if ":" in j.getAttribute("name"):
localname = j.getAttribute("name").split(':')[1]
else:
localname = j.getAttribute("name")
# set the attributes
j.setAttribute("prefix", prefix)
j.setAttribute("localname", localname)
namespacesPath = sys.argv[1]
modelPath = sys.argv[2]
# URL -> alias, from oox
ooxUrlAliases = {}
parseNamespaces(namespacesPath)
model = minidom.parse(modelPath)
check(model)
preprocess(model)
model.writexml(sys.stdout)
# vim:set shiftwidth=4 softtabstop=4 expandtab:
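The xmlns scraping done by `parseNamespaceAliases()` above can be illustrated on a tiny standalone document (the namespace URLs here are made up for the example):

```python
from xml.dom import minidom

doc = minidom.parseString(
    '<model xmlns:w="http://example.com/w" xmlns:r="http://example.com/r"/>')
# Same filter as parseNamespaceAliases(): keep only the xmlns:* attributes
# and strip the "xmlns:" prefix from the key.
aliases = {k.replace("xmlns:", ""): v
           for k, v in doc.documentElement.attributes.items()
           if k.startswith("xmlns:")}
# aliases == {"w": "http://example.com/w", "r": "http://example.com/r"}
```

These alias-to-URL pairs are what `preprocess()` later joins with the URL-to-prefix table loaded by `parseNamespaces()` to resolve each element's prefix.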
|
self isEmpty ifTrue: [^Array new].
val := toPositiveIntegerBlock value: eachElem.
ans add: val] promise]) do: [:p | p value].
|
from types import IntType
class validatable:
@classmethod
def validate(cls, item):
for memberName in dir(cls):
member = getattr(cls, memberName)
if type(member) == IntType and item == member:
return item
raise ConfigError("Invalid " + str(cls.__name__) + ": " + str(item))
class action(validatable):
GET = 1
RUN = 2
RESET = 3
START = 4
class device(validatable):
VERSION = 0
ULTRASONIC_SENSOR = 1
TEMPERATURE_SENSOR = 2
LIGHT_SENSOR = 3
POTENTIONMETER = 4
JOYSTICK = 5
GYRO = 6
SOUND_SENSOR = 7
RGBLED = 8
SEVSEG = 9
MOTOR = 10
SERVO = 11
ENCODER = 12
IR = 13
PIRMOTION = 15
INFRARED = 16
LINEFOLLOWER = 17
SHUTTER = 20
LIMITSWITCH = 21
BUTTON = 22
DIGITAL = 30
ANALOG = 31
PWM = 32
SERVO_PIN = 33
TOUCH_SENSOR = 34
STEPPER = 40
    ENCODER = 41  # note: shadows ENCODER = 12 defined above
TIMER = 50
class port(validatable):
PORT_1 = 1
PORT_2 = 2
PORT_3 = 3
PORT_4 = 4
PORT_5 = 5
PORT_6 = 6
PORT_7 = 7
PORT_8 = 8
MOTOR_1 = 9
MOTOR_2 = 10
class slot(validatable):
SLOT_1 = 1
SLOT_2 = 2
class ConfigError(Exception):
def __init__(self, message):
super(ConfigError, self).__init__(message)
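The `validate()` lookup above can be exercised in isolation; here is a minimal Python 3 restatement of the same pattern (the class names are illustrative, not part of the module):

```python
class Validatable(object):
    @classmethod
    def validate(cls, item):
        # Accept the value only if it matches one of the int class attributes.
        for name in dir(cls):
            member = getattr(cls, name)
            if isinstance(member, int) and not isinstance(member, bool) \
                    and item == member:
                return item
        raise ValueError("Invalid %s: %s" % (cls.__name__, item))

class Port(Validatable):
    PORT_1 = 1
    PORT_2 = 2

assert Port.validate(2) == 2  # known value passes through unchanged
```

An unknown value such as `Port.validate(99)` raises, which is how callers get early feedback on bad port/slot/device constants.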
|
By STEPHANIE SALMONS Hawaii Tribune-Herald | Thursday, February 7, 2019, 12:05 a.m.
Pahoa resident Linda Berry steps out of the swimming pool at Pahoa Community Aquatic Center after being the first person to get in Wednesday after it was reopened.
HOLLYN JOHNSON/Tribune-Herald Lifeguard Cassandra Beccia gets a hug from Naomi Powers at the Pahoa Community Aquatic Center Wednesday after the pool’s reopening.
HOLLYN JOHNSON/Tribune-Herald A lifeguard talks to swimmers at the Pahoa Community Aquatic Center Wednesday after the pool’s reopening.
For the first time in months, swimmers glided through the water at the Pahoa Community Aquatic Center on Wednesday morning.
Others made a literal splash as they jumped off the diving board, and at the opposite end of the pool a small group began their aquatic exercises.
The pool reopened following a blessing ceremony Wednesday, nine months after it closed because of last year’s Kilauea eruption in lower Puna.
The pool’s showers served as hygiene stations for evacuees at the emergency shelter established at Pahoa District Park, and repairs were needed because of damage caused by the eruption.
The comment was met with cheers. Dozens of people gathered for the reopening, many ready with their swimming gear.
“Thank you very much for coming out on this momentous day, I’m sure everyone is very, very happy that we finally get to open doors again, and you guys get to go back into the pool,” said Parks and Recreation Director Roxcie Waltjen.
In addition to cleaning Pele’s hair, or fine threads of volcanic glass, Waltjen said cracks along the bottom of the pool and sidewalks were repaired, and “some of the major workings within the pumps” were replaced.
“But today? Today you get to swim,” she said.
Linda Berry of Pahoa was among the swimmers eager to make her way into the aquatic center and was the first to enter the water.
Her excitement was almost tangible when she spoke.
Berry said not having the pool meant a “much higher gas budget” for the commute to Hilo to swim.
“It’s just so much more relaxed here. … I don’t know how to put it, it’s just an energy here that’s not at the Hilo pool,” said Berry.
Dianne McDaniel of Orchidland said she was “ecstatic” to return to the pool, where she does water aerobics.
Having the pool closed for so long was sad, but McDaniel said she understood why.
Her grandson, Zanth McDaniel, also of Orchidland, was happy to have the pool open again and grateful for the work that went into it.
Waltjen told the Tribune-Herald in December the repairs cost $146,000, but the Federal Emergency Management Agency is expected to cover most of the expense since it’s related to the eruption.
“My happiness is 10 times greater when I see … how happy the people are to have their pool back so they can get back into their exercise routines, and, you know, bring their grandchildren down,” Waltjen said after the blessing.
It is the first of many steps that will be taken to repair Pahoa District Park’s facilities for public use following its use as a temporary evacuation center.
Waltjen said Parks and Recreation will work to re-sod the fields one at a time and also redo the gymnasium floor.
The Pahoa pool will operate on an adjusted schedule until two vacant lifeguard positions are filled. It will be open 9 a.m.-5 p.m. Tuesday through Saturday.
|
# -*- encoding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import abc
import six
from healing.openstack.common import log
from healing.openstack.common import jsonutils
from healing.objects import alarm_track as alarm_obj
LOG = log.getLogger(__name__)
STATE_OK = 'ok'
STATE_ALARM = 'alarm'
STATE_INSUFFICIENT = 'insufficient'
OK_HOOK = 'ok_actions'
ALARM_HOOK = 'alarm_actions'
INSUFFICIENT_HOOK = 'insufficient_data_actions'
CURRENT_HANDLER = None
"""
Handle AlarmObjs and ceilometer alarms. Always use this engine
to create AlarmObjs.
TODO: we can add a join table also for contract-alarm later; all the mess
is only for the singleton alarm.
I don't like this, since we still need the obj to wrap,
but it's only for developers...
"""
class AlarmMetaClass(type):
"""Metaclass that allows tracking classes by alarm type."""
AVAILABLE_ALARMS = {}
    def __init__(cls, names, bases, dict_):
        super(AlarmMetaClass, cls).__init__(names, bases, dict_)
        AlarmMetaClass.AVAILABLE_ALARMS[cls.ALARM_TYPE] = cls
@six.add_metaclass(AlarmMetaClass)
class AlarmBase(object):
""" Some alarms will be unique , other's per VM, etc
This work as a wrapper arund AlarmTrack objects
"""
ALARM_TYPE = 'base'
def __init__(self, ctx, remote_alarm_id=None,
contract_id=None, meter="dummy",
threshold=0, period=120, operator="eq",
query=None, alarm_object=None, evaluation_period=1,
statistic='avg', **kwargs):
"""
You need to provide contract_id, meter, threshold, period and
operator if it's a new object
"""
self.ctx = ctx
        # additional data based on the subclass
        self.options = kwargs or {}
        # this is filled in some specific cases on create/update
        self.extra_alarm_data = {}
if alarm_object:
self.alarm_track = alarm_object
return
        # only update once if alarm_id is not in place, e.g. a new alarm,
        # and only if values are set, to avoid exceptions on field coercion;
        # it will fail on save later if not properly set
self.alarm_track = alarm_obj.AlarmTrack()
self.contract_id = contract_id
self.meter = meter
self.alarm_id = remote_alarm_id
self.period = period
self.threshold = threshold
self.statistic = statistic
self.operator = operator
self.evaluation_period = evaluation_period
self.type = self.ALARM_TYPE
    # this could be done by __getattr__ and __setattr__ to proxy the object,
    # but... make it explicit like this
@property
def alarm_track_id(self):
return self.alarm_track.id
@property
def alarm_id(self):
return self.alarm_track.alarm_id
@alarm_id.setter
def alarm_id(self, val):
self.alarm_track.alarm_id = val
@property
def type(self):
return self.alarm_track.type
@type.setter
def type(self, val):
self.alarm_track.type = val
@property
def contract_id(self):
return self.alarm_track.contract_id
@contract_id.setter
def contract_id(self, val):
self.alarm_track.contract_id = val
@property
def meter(self):
return self.alarm_track.meter
@meter.setter
def meter(self, val):
self.alarm_track.meter = val
@property
def threshold(self):
return self.alarm_track.threshold
@threshold.setter
def threshold(self, val):
self.alarm_track.threshold = val
@property
def operator(self):
return self.alarm_track.operator
@operator.setter
def operator(self, val):
self.alarm_track.operator = val
@property
def period(self):
return self.alarm_track.period
@period.setter
def period(self, val):
self.alarm_track.period = val
@property
def statistic(self):
return self.alarm_track.statistic
@statistic.setter
def statistic(self, val):
self.alarm_track.statistic = val
@property
def evaluation_period(self):
return self.alarm_track.evaluation_period
@evaluation_period.setter
def evaluation_period(self, val):
self.alarm_track.evaluation_period = val
@property
def query(self):
        # TODO MUST: add a JSON field that does this into fields.py
        try:
            return jsonutils.loads(self.alarm_track.query)
        except (TypeError, ValueError):
            return []
@query.setter
def query(self, val):
self.alarm_track.query = jsonutils.dumps(val)
@abc.abstractmethod
def create(self):
pass
def set_from_dict(self, update_dict):
for x in alarm_obj.AlarmTrack.fields.keys():
present = update_dict.get(x)
if present:
                # to avoid change_fields being modified
setattr(self, x, present)
@abc.abstractmethod
def update(self):
pass
@abc.abstractmethod
def delete(self):
pass
def is_active(self):
return True
def get_extra_alarm_data(self):
return self.extra_alarm_data
def affected_resources(self, group_by='resource_id',
period=0, query=None,
start_date=None, end_date=None,
aggregates=None, delta_seconds=None,
meter=None,
result_process=None):
pass
def set_default_alarm_hook(self):
pass
def set_default_ok_hook(self):
pass
def set_default_insufficient_hook(self):
pass
def set_ok_hook_url(self, url):
pass
def set_alarm_hook_url(self, url):
pass
def set_insufficient_hook_url(self, url):
pass
def get_hooks(self):
pass
|
This Saturday, November 13th at 1pm, The Black Heritage Reference Center of Queens County in conjunction with BulLion Entertainment will host “A Tribute to Ralph McDaniels”, at Queens Library at The Langston Hughes Community Library and Cultural Center, 100-01 Northern Boulevard, Corona, New York.
Ralph “Uncle Ralph” McDaniels is a hip-hop culture pioneer, entrepreneur, and visionary who created “Video Music Box”, the first music video show focused exclusively on an urban market and broadcast on public television.
The event will feature video presentations from Ralph McDaniels’ groundbreaking hip hop show “Video Music Box” as well as his most recent work as executive producer and host of “The Bridge”.
Ralph McDaniels will be interviewed by Mark “DJ Wiz” Eastmond, DJ/Producer, of the legendary hip-hop group Kid ‘N Play. The day would not be complete without testimonials and guest appearances from those in the music and hip-hop industry that have been a part of the life and career of “Uncle Ralph".
For more information about programs, services, locations, events and news, visit the Queens Library Web site at www.queenslibrary.org or phone 718-990-0700.
|
# -*- coding: utf-8 -*-
import datetime
from south.db import db
from south.v2 import DataMigration
from django.db import models
from udj.trans_migration_constants import ADDED_DEFAULT_ALGO_NAME
class Migration(DataMigration):
def forwards(self, orm):
totalAlgo = orm.SortingAlgorithm(
name=ADDED_DEFAULT_ALGO_NAME,
description="Sorts playlist be the total amount of votes per song",
function_name='totalVotes')
totalAlgo.save()
def backwards(self, orm):
pass
models = {
'auth.group': {
'Meta': {'object_name': 'Group'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '80'}),
'permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'})
},
'auth.permission': {
'Meta': {'ordering': "('content_type__app_label', 'content_type__model', 'codename')", 'unique_together': "(('content_type', 'codename'),)", 'object_name': 'Permission'},
'codename': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'content_type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['contenttypes.ContentType']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '50'})
},
'auth.user': {
'Meta': {'object_name': 'User'},
'date_joined': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'email': ('django.db.models.fields.EmailField', [], {'max_length': '75', 'blank': 'True'}),
'first_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'groups': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Group']", 'symmetrical': 'False', 'blank': 'True'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_active': ('django.db.models.fields.BooleanField', [], {'default': 'True'}),
'is_staff': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_superuser': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'last_login': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime.now'}),
'last_name': ('django.db.models.fields.CharField', [], {'max_length': '30', 'blank': 'True'}),
'password': ('django.db.models.fields.CharField', [], {'max_length': '128'}),
'user_permissions': ('django.db.models.fields.related.ManyToManyField', [], {'to': "orm['auth.Permission']", 'symmetrical': 'False', 'blank': 'True'}),
'username': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '30'})
},
'contenttypes.contenttype': {
'Meta': {'ordering': "('name',)", 'unique_together': "(('app_label', 'model'),)", 'object_name': 'ContentType', 'db_table': "'django_content_type'"},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'model': ('django.db.models.fields.CharField', [], {'max_length': '100'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '100'})
},
'udj.activeplaylistentry': {
'Meta': {'object_name': 'ActivePlaylistEntry'},
'adder': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'song': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['udj.LibraryEntry']"}),
'state': ('django.db.models.fields.CharField', [], {'default': "u'QE'", 'max_length': '2'}),
'time_added': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'})
},
'udj.libraryentry': {
'Meta': {'object_name': 'LibraryEntry'},
'album': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'artist': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'duration': ('django.db.models.fields.IntegerField', [], {}),
'genre': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'is_banned': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'is_deleted': ('django.db.models.fields.BooleanField', [], {'default': 'False'}),
'player': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['udj.Player']"}),
'player_lib_song_id': ('django.db.models.fields.IntegerField', [], {}),
'title': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'track': ('django.db.models.fields.IntegerField', [], {})
},
'udj.participant': {
'Meta': {'unique_together': "(('user', 'player'),)", 'object_name': 'Participant'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'player': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['udj.Player']"}),
'time_joined': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'}),
'time_last_interaction': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'auto_now_add': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
},
'udj.player': {
'Meta': {'object_name': 'Player'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '200'}),
'owning_user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"}),
'sorting_algo': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['udj.SortingAlgorithm']"}),
'state': ('django.db.models.fields.CharField', [], {'default': "'IN'", 'max_length': '2'}),
'volume': ('django.db.models.fields.IntegerField', [], {'default': '5'})
},
'udj.playerlocation': {
'Meta': {'object_name': 'PlayerLocation'},
'address': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'city': ('django.db.models.fields.CharField', [], {'max_length': '50'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'player': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['udj.Player']", 'unique': 'True'}),
'point': ('django.contrib.gis.db.models.fields.PointField', [], {'default': "'POINT(0.0 0.0)'"}),
'state': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['udj.State']"}),
'zipcode': ('django.db.models.fields.IntegerField', [], {})
},
'udj.playerpassword': {
'Meta': {'object_name': 'PlayerPassword'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'password_hash': ('django.db.models.fields.CharField', [], {'max_length': '40'}),
'player': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['udj.Player']", 'unique': 'True'}),
'time_set': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'auto_now_add': 'True', 'blank': 'True'})
},
'udj.playlistentrytimeplayed': {
'Meta': {'object_name': 'PlaylistEntryTimePlayed'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'playlist_entry': ('django.db.models.fields.related.OneToOneField', [], {'to': "orm['udj.ActivePlaylistEntry']", 'unique': 'True'}),
'time_played': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'})
},
'udj.sortingalgorithm': {
'Meta': {'object_name': 'SortingAlgorithm'},
'description': ('django.db.models.fields.CharField', [], {'max_length': '500'}),
'function_name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '200'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '200'})
},
'udj.state': {
'Meta': {'object_name': 'State'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '2'})
},
'udj.ticket': {
'Meta': {'unique_together': "(('user', 'ticket_hash'),)", 'object_name': 'Ticket'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'ticket_hash': ('django.db.models.fields.CharField', [], {'unique': 'True', 'max_length': '32'}),
'time_issued': ('django.db.models.fields.DateTimeField', [], {'auto_now': 'True', 'blank': 'True'}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"})
},
'udj.vote': {
'Meta': {'unique_together': "(('user', 'playlist_entry'),)", 'object_name': 'Vote'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'playlist_entry': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['udj.ActivePlaylistEntry']"}),
'user': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['auth.User']"}),
'weight': ('django.db.models.fields.IntegerField', [], {})
}
}
complete_apps = ['udj']
symmetrical = True
|
A month ago, I warned that support for Windows Essentials 2012 was ending. Today, it’s gone. You can no longer download this still-useful suite of utilities.
Well, the download is gone. So. Yeah.
This is bad news on a number of levels, the most obvious being that Microsoft doesn’t have an acceptable replacement for Windows Movie Maker, which was a simple but easy-to-use video editor.
As for the other apps in the suite, most are well past their prime. OneDrive is now included in Windows, of course, and Windows Live Writer is now available in open source (and supported) form, for the few people who need or want to actually install an app to write blog posts. Windows Live Messenger was long ago replaced by Skype, which I think most will agree has not exactly gone well. Hope springs eternal.
Aside from Windows Movie Maker, the biggest loss here, however, is Windows Photo Gallery, which includes an incredible array of functionality that the Photos app in Windows 10 will never duplicate.
Despite this, Microsoft actually recommends the built-in apps in Windows as replacements. That would be hilarious if this wasn’t so bad.
|
from functionrefactor.commands import launch_case
from functionrefactor.formatter import Formatter
from functionrefactor.settings import *
from tests.common_functions import *
import json
class TestRuns():
def test_runs(self):
with open("tests/cases/test_cases.json", "r") as tcf:
test_launcher = json.load(tcf)
settings.update_global_settings(test_launcher)
test_cases = test_launcher["launcher"]
for case in test_cases:
if case["active"]:
                [hpp_result, cpp_result] = launch_case("tests/cases", case)
self.check_result("tests/cases/" + case["hpp_out"],
"tests/cases/" + case["cpp_out"], hpp_result,
cpp_result)
def check_result(self, hpp_path, cpp_path, hpp_result, cpp_result):
clang_formatter = Formatter()
hpp_expected_result = clang_formatter.open_and_launch(hpp_path)
cpp_expected_result = clang_formatter.open_and_launch(cpp_path)
are_equal(hpp_result, hpp_expected_result)
are_equal(cpp_result, cpp_expected_result)
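The launcher loop above follows a small, reusable pattern: a JSON config lists test cases, each carrying an `"active"` flag, and only the active ones are executed. A minimal self-contained sketch of that pattern (the case names and config structure here are illustrative, not the project's actual `test_cases.json`):

```python
import json

# A tiny stand-in for a test_cases.json file: a list of cases,
# each with an "active" toggle to skip it without deleting it.
config = json.loads("""
{
  "launcher": [
    {"name": "basic", "active": true},
    {"name": "slow",  "active": false}
  ]
}
""")

# Run (here: just collect) only the cases flagged as active.
ran = [case["name"] for case in config["launcher"] if case["active"]]
print(ran)
```

Keeping the on/off switch in data rather than in code means a failing or slow case can be disabled in the config file without touching the test suite itself.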
|
Am I becoming a gardener?
It is now two months since I started living in my own place, and the most surprising thing is how much gardening I have been doing.
It all started with the grass, which I mowed a couple of times using a borrowed lawn mower. While the edges in the backyard were easy to handle with some shears, in the front yard there was a ground cover plant blending in. I say ‘was’ as a few hours with the shears filled up my green bin and exposed a rock border.
My long term strategy is to have a garden that doesn’t annoy me. This means that the things that need doing don’t take much time or do not need doing very often. In the case of mowing the grass, I will at some stage add a concrete strip in front of the rock border to simplify future mowing.
For a week or two nothing annoyed me, until the bushes around the letterbox started to envelop the letter box and the bamboo down the side of the driveway started to rub against the car. An afternoon with shears solved those problems, but only for the time being as they will eventually grow back. The long term strategy there will be to remove the bamboo completely and put something else near the letterbox.
The outcome of this and other jobs (such as trimming back the other edge of the front lawn) is that I have been filling up my green bin every fortnight and it looks like it will continue to be filled up for many fortnights to come.
To enable this gardening I have accumulated some tools; spade, fork, shears, secateurs, gloves, broom, etc. While I did borrow a lawn mower a couple of times, I realised that I needed to get one of my own as I intend to get into the habit of keeping the grass mown.
To avoid the hassle and noise of a petrol lawn mower, I decided on electric. Granted, I did buy the cheap small electric mower from Bunnings, but so far it has worked out nicely. Although I haven’t quite worked out a good way to manage the power cable as running over it with the mower is not a good idea.
For the second year in a row I have missed writing a post for the entire month of January, the lull extending back to just before I moved into my place. A few times I started posts related to settling in, but none have been completed.
For the same reasons that I didn’t write any blog posts, I didn’t spend as much time as I should have preparing my entries that I cobbled together at the last minute.
My other two images are more recent, being from one of the short road trips that Damien and I have been taking in preparation for the three week trip to Perth and back. The trip in particular heading down the Great Ocean Road to Cape Otway and then back inland past a few waterfalls.
This is a 1.5 second exposure that is directly from the camera, which makes it all the more sweet that it was awarded a highly commended.
Unlike my other three entries this month, I did fiddle with this by changing the levels to increase the contrast and by adding a vignette. The judge did confirm a suspicion I had, there is too much vignette, something I will keep in mind for next time.
|
from core.himesis import Himesis
import uuid
class Hlayer1rule1(Himesis):
def __init__(self):
"""
Creates the himesis graph representing the DSLTrans rule layer1rule1.
"""
# Flag this instance as compiled now
self.is_compiled = True
super(Hlayer1rule1, self).__init__(name='Hlayer1rule1', num_nodes=0, edges=[])
# Set the graph attributes
self["mm__"] = ['HimesisMM']
self["name"] = """layer1rule1"""
self["GUID__"] = uuid.uuid3(uuid.NAMESPACE_DNS,'layer1rule1')
# match model. We only support one match model
self.add_node()
self.vs[0]["mm__"] = """MatchModel"""
# apply model node
self.add_node()
self.vs[1]["mm__"] = """ApplyModel"""
# paired with relation between match and apply models
self.add_node()
self.vs[2]["mm__"] = """paired_with"""
self.vs[2]["attr1"] = """layer1rule1"""
# match class ClientServerInterface(layer1rule1class0ClientServerInterface) node
self.add_node()
self.vs[3]["mm__"] = """ClientServerInterface"""
self.vs[3]["attr1"] = """+"""
# match class Operation(layer1rule1class1Operation) node
self.add_node()
self.vs[4]["mm__"] = """Operation"""
self.vs[4]["attr1"] = """+"""
# apply class StructDeclaration(layer1rule1class2StructDeclaration) node
self.add_node()
self.vs[5]["mm__"] = """StructDeclaration"""
self.vs[5]["attr1"] = """1"""
# apply class CFunctionPointerStructMember(layer1rule1class3CFunctionPointerStructMember) node
self.add_node()
self.vs[6]["mm__"] = """CFunctionPointerStructMember"""
self.vs[6]["attr1"] = """1"""
# match association ClientServerInterface--contents-->Operation node
self.add_node()
self.vs[7]["attr1"] = """contents"""
self.vs[7]["mm__"] = """directLink_S"""
# apply association StructDeclaration--members-->CFunctionPointerStructMember node
self.add_node()
self.vs[8]["attr1"] = """members"""
self.vs[8]["mm__"] = """directLink_T"""
        # backward association CFunctionPointerStructMember --> Operation node
self.add_node()
self.vs[9]["mm__"] = """backward_link"""
        # backward association StructDeclaration --> ClientServerInterface node
self.add_node()
self.vs[10]["mm__"] = """backward_link"""
# Add the edges
self.add_edges([
(0,3), # matchmodel -> match_class ClientServerInterface(layer1rule1class0ClientServerInterface)
(0,4), # matchmodel -> match_class Operation(layer1rule1class1Operation)
(1,5), # applymodel -> apply_classStructDeclaration(layer1rule1class2StructDeclaration)
(1,6), # applymodel -> apply_classCFunctionPointerStructMember(layer1rule1class3CFunctionPointerStructMember)
(3,7), # match classClientServerInterface(layer1rule1class0ClientServerInterface) -> association contents
(7,4), # associationcontents -> match_classClientServerInterface(layer1rule1class1Operation)
(5,8), # apply class StructDeclaration(layer1rule1class2StructDeclaration) -> association members
(8,6), # associationmembers -> apply_classCFunctionPointerStructMember(layer1rule1class3CFunctionPointerStructMember)
(6,9), # apply class CFunctionPointerStructMember(layer1rule1class1Operation) -> backward_association
(9,4), # backward_associationOperation -> match_class Operation(layer1rule1class1Operation)
(5,10), # apply class StructDeclaration(layer1rule1class0ClientServerInterface) -> backward_association
(10,3), # backward_associationClientServerInterface -> match_class ClientServerInterface(layer1rule1class0ClientServerInterface)
(0,2), # matchmodel -> pairedwith
(2,1) # pairedwith -> applyModel
])
self["equations"] = []
|
Free online Volume conversion. Convert 300 dm3 to liter (cubic decimeters to l). How much is 300 dm3 to liter? Made for you with much ♥ by CalculatePlus.
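The answer to the query above is exact: a cubic decimeter is by definition one liter, so 300 dm3 is simply 300 l. A throwaway sketch of the conversion (the function name is ours, not part of CalculatePlus):

```python
# 1 dm^3 is exactly 1 litre, so the dm^3 -> litre factor is 1.
DM3_PER_LITRE = 1.0

def dm3_to_litre(volume_dm3):
    """Convert a volume in cubic decimetres to litres."""
    return volume_dm3 * DM3_PER_LITRE

print(dm3_to_litre(300))  # 300.0
```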
|
import time
import os.path
import Trace
import scipy
__version__="01.00.00"
__author__ ="Robert Shelansky"
class TraceModel:
"""
A model/controller of data which knows how to do specific analysis and maintenance of settings etc.
Can be used as a data dump for communication with the TraceView.
"""
def __init__(self,imgres=scipy.NAN,directory=None,user=None,out_path=None,title=None,version=None,files=None,length=None,threshold=None,smooth=None,**kw):
self.settings = {
'length':length,
'threshold':threshold,
'smooth':smooth,
'end':None,
'midpoint':None}
self.context = {
'files' :files,
'version':version,
'date':time.strftime("%d/%m/%Y %H:%M:%S"),
'path':None,
'trace':None,
'coordinates':None,
'index':None,
'domain':None,
'basename':None,
'title':title,
'scale':None,
'user':user,
'resolution':None,
'steadiness':None,
'saved':['' for i in range(len(files))],
'out_path':out_path,
'imgres':imgres,
'pimgres':None,
'directory':directory}
self.molecule = {
'smoothed':None,
'fr':None,
'fl':None,
'rr':None,
'rl':None,
'zipped':None,
'zlabels':None,
            'symmetry':None,
'starts':None,
'bubs':None}
self.seek(0)
self.find_midpoint()
self.analyze()
def find_midpoint(self):
self.settings['end' ]=self.context['trace'].edgebuffer(
threshold=self.settings['threshold'],
smooth =self.settings['smooth'])
self.settings['midpoint' ]=self.context['trace'].midpoint(
threshold =self.settings['threshold'],
smooth =self.settings['smooth'],
end =self.settings['end'])
def analyze(self):
midpoint,smoothed,(fr,fl),(rr,rl),(regions,labels)=self.context['trace'].solve_molecule(
self.settings['midpoint'],
self.settings['threshold'],
self.settings['smooth'],
self.settings['end'])
self.molecule['segments'] =len(fr) if len(fr) == len(rr) else float('Nan')
self.molecule['smoothed'] =smoothed
self.molecule['fr' ] =fr
self.molecule['fl' ] =fl
self.molecule['rr' ] =rr
self.molecule['rl' ] =rl
self.molecule['zipped' ] =regions
self.molecule['zlabels' ] =labels
self.molecule['rlabels' ] =self.context['trace'].label(fl,rl)
self.molecule['molecule'] =self.context['trace'].moleculify(
fr,
fl,
rr,
rl,
self.settings['length'])
self.molecule['symmetry'] =self.context['trace'].sd_segment_size (fr,fl,rr,rl) * self.context['scale']**2
self.molecule['starts' ] =self.context['trace'].msd_of_linker_end_points(fr,fl,rr,rl) * self.context['scale']**2
self.molecule['bubs' ] =self.context['trace'].msd_of_region_sizes (fr,fl,rr,rl) * self.context['scale']**2
def seek(self, index):
BASE_TO_NM_CONVERSION_FACTOR=0.34#nm/bp
self.context['path' ] =self.context['files'][index]
self.context['index'] =index
self.context['basename']=os.path.basename(self.context['path'])
##reads _trace file into scipy.array
self.context['coordinates'] = scipy.genfromtxt(
self.context['path' ],
delimiter='\t')
self.context['trace'] = Trace.Trace(self.context['coordinates'])
self.context['scale'] = self.context['trace'].scale(self.settings['length'])
self.context['resolution'] = self.context['trace']._ld.mean() * self.context['scale']
self.context['steadiness'] = scipy.sqrt(self.context['trace']._ld.var()) * self.context['scale']
self.context['pimgres'] = self.context['scale'] * BASE_TO_NM_CONVERSION_FACTOR
self.context['domain'] = scipy.array(range(len(self.context['trace'])))
def write_comments(self,file):
print("""
##Image_Resolution\t{:.2f} nm/px
##Predicted_Image_Resolution\t{:>.2f} nm/px
##Tracer\t{}
##Length\t{} bp
##Edgebuffer\t{} Coordinates
##Threshold\t{:.2f} bp
##Smooth\t{} Coordinates
##Midpoint\t{} Coordinate
##Scale\t{:.2f} bp/AU
##Resolution\t{:.2f} bp
##Steadiness\t{:.2f} bp
##Segments\t{:} #
##Symmetry\t{:.2f} bp
##Linker\t{:.2f} bp
##Region\t{:.2f} bp""".format(
self.context['imgres'],
self.context['pimgres'],
self.context['user'],
self.settings['length'],
self.settings['end'],
self.settings['threshold'] * self.context['scale'],
self.settings['smooth'],
self.settings['midpoint'],
self.context['scale'],
self.context['resolution'],
self.context['steadiness'],
self.molecule['segments'],
self.molecule['symmetry'],
self.molecule['starts'],
self.molecule['bubs']),file=file)
def save(self):
base=os.path.basename(self.context['path']).split('.')[0]
if self.context['out_path'] is not None:
path=self.context['out_path']
else:
path= os.path.dirname(self.context['path'])
        if self.context['directory']:
            # use os.path.join instead of hard-coded '\\' so the output
            # paths work on non-Windows platforms too
            path = os.path.join(path, base)
            if not os.path.exists(path):
                os.makedirs(path)
        mol_file = os.path.join(path, '{}.mol'.format(base))
        reg_file = os.path.join(path, '{}.reg'.format(base))
with open(mol_file, 'w') as file:
self.write_comments(file)
self.molecule['molecule'].write(file)
with open(reg_file,'w') as file:
self.write_comments(file)
reg="\n".join(['{}\t{}\t{}'.format(l,s,e) for (s,e),l in zip(self.molecule['zipped'],self.molecule['zlabels'])])
print(reg,file=file)
self.context['saved'][self.context['index']] = 'Saved.'
|
A lot of people love to wear braids, especially young ladies! And with summer coming, braided hairstyles are becoming popular. Today I’d like to show you how to do a cool and stylish 5 strand braid.
The five strand braid is easy to create and is a perfect hairstyle for school, college, university and even work (if you work in a creative place).
Hi my beautiful friends, how are you all doing? It’s Mimi here today, and today’s tutorial is going to be a five-strand braid. I know, I’m so excited to show it to you guys, so recently I did a four-strand braid and when I did it, I thought it was so hard at first and I thought I could never learn it or teach you guys, and I did, yay! And after I did the four-strand braid I thought, okay, I need to challenge myself a little more – what’s the next thing?
And of course I found out there is another braid which is more complicated and it’s a five-strand braid and you know, I watched a few tutorials and at first I couldn’t get it and I just kept trying and trying and I finally got it and it looks even more awesome than the four-strand braid. I actually do like it more than the regular braid or a fishtail braid – this is my favorite right now. I’ve been wearing my hair like this quite a lot. It’s really easy to recreate and I think it’s a perfect look that you can wear you know, back to school, back to college university – wherever you’re going or just you know, work.
So, let’s get into the tutorial. For this tutorial pretty much, we’re not going to need a lot of things. You will need a hairbrush to brush your hair. I am wearing my Luxy hair extensions in chocolate brown, 160 gram set. I just clipped them in – I haven’t bothered blending in or anything. You might also need some – not you might – you will need some hair elastics. So I’m using just the really tiny ones that are like, kind of see-through so they really blend with the hair and you can’t really see the elastic. You will or might need some hair spray – this is my favorite all natural hair spray by Intelligent Nutrients. And again, depending on what kind of look you’re going to go for I’ll explain at the end of the video, you might or might not need your curler. This is my Cortex Curler – yeah it is a Cortex – this is my Cortex Curler 1 and this barrel is a 1 inch barrel.
The first thing you want to do is just brush the hair like always. So, just – here’s a regular paddle brush. Start with the bottom and just brush it all the way up. Once you brush the hair, you want to split it into five equal sections. So you want to just grab the hair and start splitting it into five equal sections. Some people say it is easier for them to break it into two sections and then four, and five, but what I do is just grab one hand with one hand I try to you know, make kind of equal sections and then if I need to add more, add more. The key is just you know, to try and make it as equal as you can possibly can. As you can possibly can?
Alright, so I’m just going to add more here. So this is what you’re going to have once you’ve split all the hair – you’re going to have five different sections. And the key to braiding this look is kind of understanding where all the sections go. So in your left hand you will have section one and two, and in your right hand you will have section four and five or the other way around – it doesn’t really matter. And then the third section we’ll have kind of just linger in there – It’ll just be like hanging in the middle.
And all you’re going to be doing is, bring section one over two and under third, and then the same thing you’re going to do on the left hand – so you’re copying the exact same steps. So, first goes over second and under the third and I will show you now how you actually do that in practice. So, bring it over and then under, and then kind of just push it up. And then bring the first from the left over, and then under and push everything up. Over, under, and then over, under. It does really help saying over and under just so you kind of keep track of what step you’re at – over, under, over, under – and keep pushing it up for a more neat look.
Over, under, over, under, over, under, over, under, over, under, over, and under. This is where I’m going to stop. All you want to do is just grab one of those elastic bands and secure it at the end. So just secure it neatly at the end and it will look something like this, and then what you want to do is just kind of pull on the strands of the hair to make this beautiful braid even thicker. Oh, my God it looks amazing – that’s why I love this braid. There’s just nothing like it. It’s just beautiful.
You can either stop here or you can go ahead with me and do a little more romantic waves around the hair that’s sort of falling out. If you do enough layers, you’ll have a lot of hair falling out. And to make it a little prettier or romantic looking, what you can do is just grab certain sections on this side or on this side that are kind of falling out, and start curling them away. And you just want to hold them for a few seconds – not longer – and release. It’s a like, really pretty kind of wave, but I will break down with my fingers after. So just curl it, and let go. And then maybe I’ll do a little bit on the bangs as well.
And then just right away you just kind of break it with your fingers so all you have is sort of like – you don’t have any defined waves just like kind of a mess here. Like a pretty kind of mess. And then I’m going to go ahead and take a section from here and just curl it away as well. So if you do have layers, you’ll have a lot of this. If you don’t, you can just pull the strand and just kind of curl it away. This will only take a minute or two, but I think it makes the whole look a little more romantic – a little more girly. So once you’ve curled it, you can like, break it with your fingers. And the last step is just to spray the hair with the hair spray. Once you spray the hair you’re pretty much done.
This is what it should look like and you know, it’s perfect. It looks so awesome. If you wear it to school, I’m sure all the girls will be asking you how you braided your hair – or work – anywhere you’re going to go, you’re going to get peoples’ attention because you don’t get to see this kind of funky braid you know, quite often on the streets or at school – or anywhere really. I hope you enjoyed this tutorial. Thank you so much for tuning in. Definitely give it a try and let me know if it worked. If it didn’t work, try the four-strand braid. Practice it, practice it, and then once you understand how the four-strand braid works, this one will be you know, honestly super easy. Thank you so much again, and I love you guys. I’ll see you in the next tutorial. Bye.
If your hair is long enough, you can also try the five-strand braid headband!
Here are more hair tutorials for you, enjoy.
import ast
import copy
import datetime
import logging
import matplotlib
matplotlib.use('TkAgg')
#matplotlib.use('Qt4Agg')
import numpy
import ntpath
import time
import Tkinter as tk
import tkMessageBox
import os
import sys
# The Lindsay Trap: check the scripts directory is present
if not os.path.exists("./scripts/"):
print "OzFluxQC: the scripts directory is missing"
sys.exit()
# since the scripts directory is there, try importing the modules
sys.path.append('scripts')
import cfg
import qcclim
import qccpd
import qcgf
import qcio
import qcls
import qcplot
import qcrp
import qcts
import qcutils
# now check the logfiles and plots directories are present
dir_list = ["./logfiles/","./plots/"]
for item in dir_list:
if not os.path.exists(item): os.makedirs(item)
# now check the solo/inf, solo/input, solo/log and solo/output directories are present
dir_list = ["./solo/inf","./solo/input","./solo/log","./solo/output"]
for item in dir_list:
if not os.path.exists(item): os.makedirs(item)
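The two directory checks above repeat the same create-if-missing pattern; a minimal, self-contained sketch of that pattern (demonstrated in a throwaway temporary location rather than the script's real logfiles, plots and solo directories) is:

```python
import os
import tempfile

def ensure_dirs(dir_list):
    """Create any directory in dir_list that does not already exist."""
    for item in dir_list:
        if not os.path.exists(item):
            os.makedirs(item)

# Demonstration in a temporary base directory; the script above applies
# the same idea to ./logfiles, ./plots and the ./solo subdirectories.
base = tempfile.mkdtemp()
ensure_dirs([os.path.join(base, "logfiles"), os.path.join(base, "plots")])
```

On Python 2.7 (which this script targets) `os.makedirs` has no `exist_ok` argument, so the explicit `os.path.exists` guard is the idiomatic way to make the call safe to repeat.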
logging.basicConfig(filename='logfiles/OzFluxQC.log',level=logging.DEBUG)
console = logging.StreamHandler()
formatter = logging.Formatter('%(asctime)s %(levelname)s %(message)s', '%H:%M:%S')
console.setFormatter(formatter)
console.setLevel(logging.INFO)
logging.getLogger('').addHandler(console)
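The handler setup above writes DEBUG and higher to the log file while echoing INFO and higher to the console. A self-contained sketch of that dual-destination pattern, using a hypothetical log path and a demo logger name rather than the root logger configured above, might look like:

```python
import logging
import os
import tempfile

log_path = os.path.join(tempfile.mkdtemp(), "example.log")  # hypothetical path

logger = logging.getLogger("ozflux_demo")  # demo name, not the real root logger
logger.setLevel(logging.DEBUG)

# file handler: captures everything, including DEBUG
file_handler = logging.FileHandler(log_path)
file_handler.setLevel(logging.DEBUG)
logger.addHandler(file_handler)

# console handler: only INFO and above, with a short timestamped format
console = logging.StreamHandler()
console.setLevel(logging.INFO)
console.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s',
                                       '%H:%M:%S'))
logger.addHandler(console)

logger.debug("goes to the file only")
logger.info("goes to both the file and the console")
```

The per-handler level is what lets one logger feed two destinations with different verbosity.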
class qcgui(tk.Tk):
"""
QC Data Main GUI
Used to access read, save, and data processing (qcls) procedures
Columns: Data levels:
1: L1 Raw Data (read excel into NetCDF)
2: L2 QA/QC (general QA/QC algorithms, site independent)
3: L3 Corrections (Flux data corrections, site dependent based on ancillary measurements available and technical issues)
4: L4 Gap Filling (Used to fill met data gaps and ingest SOLO-ANN gap filled fluxes from external processes)
Rows: function access
1: Ingest excel dataset into NetCDF files
2: Process data from previous level and generate NetCDF file(s) at current level
3-6: Show Timestamp range of dataset and accept date range for graphical plots
7: Export excel dataset from NetCDF file
"""
def __init__(self, parent):
tk.Tk.__init__(self,parent)
self.parent = parent
self.initialise()
def option_not_implemented(self):
self.do_progress(text='Option not implemented yet ...')
logging.info(' Option not implemented yet ...')
def initialise(self):
self.org_frame = tk.Frame(self)
self.org_frame.grid()
# things in the first row of the GUI
L1Label = tk.Label(self.org_frame,text='L1: Raw data')
L1Label.grid(row=0,column=0,columnspan=2)
L2Label = tk.Label(self.org_frame,text='L2: QA/QC')
L2Label.grid(row=0,column=2,columnspan=2)
L3Label = tk.Label(self.org_frame,text='L3: Process')
L3Label.grid(row=0,column=4,columnspan=2)
# things in the second row of the GUI
doL1Button = tk.Button (self.org_frame, text="Read L1 file", command=self.do_l1qc )
doL1Button.grid(row=1,column=0,columnspan=2)
doL2Button = tk.Button (self.org_frame, text="Do L2 QA/QC", command=self.do_l2qc )
doL2Button.grid(row=1,column=2,columnspan=2)
doL3Button = tk.Button (self.org_frame, text="Do L3 processing", command=self.do_l3qc )
doL3Button.grid(row=1,column=4,columnspan=2)
# things in the third row of the GUI
filestartLabel = tk.Label(self.org_frame,text='File start date')
filestartLabel.grid(row=2,column=0,columnspan=3)
fileendLabel = tk.Label(self.org_frame,text='File end date')
fileendLabel.grid(row=2,column=3,columnspan=3)
# things in the fourth row of the GUI
self.filestartValue = tk.Label(self.org_frame,text='No file loaded ...')
self.filestartValue.grid(row=3,column=0,columnspan=3)
self.fileendValue = tk.Label(self.org_frame,text='No file loaded ...')
self.fileendValue.grid(row=3,column=3,columnspan=3)
# things in the fifth row of the GUI
plotstartLabel = tk.Label(self.org_frame, text='Start date (YYYY-MM-DD)')
plotstartLabel.grid(row=4,column=0,columnspan=3)
self.plotstartEntry = tk.Entry(self.org_frame)
self.plotstartEntry.grid(row=4,column=3,columnspan=3)
# things in the sixth row of the GUI
plotendLabel = tk.Label(self.org_frame, text='End date (YYYY-MM-DD)')
plotendLabel.grid(row=5,column=0,columnspan=3)
self.plotendEntry = tk.Entry(self.org_frame)
self.plotendEntry.grid(row=5,column=3,columnspan=3)
# things in the seventh row of the GUI
closeplotwindowsButton = tk.Button (self.org_frame, text="Close plot windows", command=self.do_closeplotwindows )
closeplotwindowsButton.grid(row=6,column=0,columnspan=2)
plotL1L2Button = tk.Button (self.org_frame, text="Plot L1 & L2 Data", command=self.do_plotL1L2 )
plotL1L2Button.grid(row=6,column=2,columnspan=2)
plotL3L3Button = tk.Button (self.org_frame, text="Plot L3 Data", command=self.do_plotL3L3 )
plotL3L3Button.grid(row=6,column=4,columnspan=2)
# things in the eighth row of the GUI
quitButton = tk.Button (self.org_frame, text='Quit', command=self.do_quit )
quitButton.grid(row=7,column=0,columnspan=2)
savexL2Button = tk.Button (self.org_frame, text='Write L2 Excel file', command=self.do_savexL2 )
savexL2Button.grid(row=7,column=2,columnspan=2)
savexL3Button = tk.Button (self.org_frame, text='Write L3 Excel file', command=self.do_savexL3 )
savexL3Button.grid(row=7,column=4,columnspan=2)
# other things in the GUI
self.progress = tk.Label(self.org_frame, text='Waiting for input ...')
self.progress.grid(row=8,column=0,columnspan=6,sticky="W")
# now we put together the menu, "File" first
menubar = tk.Menu(self)
filemenu = tk.Menu(menubar,tearoff=0)
filemenu.add_command(label="Concatenate netCDF",command=self.do_ncconcat)
filemenu.add_command(label="Split netCDF",command=self.do_ncsplit)
filemenu.add_command(label="List netCDF contents",command=self.option_not_implemented)
fileconvertmenu = tk.Menu(menubar,tearoff=0)
#fileconvertmenu.add_command(label="V2.7 to V2.8",command=self.do_v27tov28)
fileconvertmenu.add_command(label="nc to EddyPro (biomet)",command=self.do_nc2ep_biomet)
fileconvertmenu.add_command(label="nc to FluxNet",command=self.do_nc2fn)
fileconvertmenu.add_command(label="nc to REddyProc",command=self.do_nc2reddyproc)
fileconvertmenu.add_command(label="nc to SMAP",command=self.do_nc2smap)
fileconvertmenu.add_command(label="nc to xls",command=self.do_nc2xls)
fileconvertmenu.add_command(label="xls to nc",command=self.option_not_implemented)
filemenu.add_cascade(label="Convert",menu=fileconvertmenu)
filemenu.add_separator()
filemenu.add_command(label="Quit",command=self.do_quit)
menubar.add_cascade(label="File",menu=filemenu)
# now the "Run" menu
runmenu = tk.Menu(menubar,tearoff=0)
runmenu.add_command(label="Read L1 file",command=self.do_l1qc)
runmenu.add_command(label="Do L2 QA/QC",command=self.do_l2qc)
runmenu.add_command(label="Do L3 processing",command=self.do_l3qc)
runmenu.add_command(label="Do L4 gap fill (drivers)",command=self.do_l4qc)
runmenu.add_command(label="Do L5 gap fill (fluxes)",command=self.do_l5qc)
runmenu.add_command(label="Do L6 partitioning",command=self.do_l6qc)
menubar.add_cascade(label="Run",menu=runmenu)
# then the "Plot" menu
plotmenu = tk.Menu(menubar,tearoff=0)
plotmenu.add_command(label="Plot L1 & L2",command=self.do_plotL1L2)
plotmenu.add_command(label="Plot L3",command=self.do_plotL3L3)
plotmenu.add_command(label="Plot L4",command=self.do_plotL3L4)
plotmenu.add_command(label="Plot L5",command=self.option_not_implemented)
plotmenu.add_command(label="Plot L6 summary",command=self.do_plotL6_summary)
fnmenu = tk.Menu(menubar,tearoff=0)
fnmenu.add_command(label="Standard",command=lambda:self.do_plotfluxnet(mode="standard"))
fnmenu.add_command(label="Custom",command=lambda:self.do_plotfluxnet(mode="custom"))
plotmenu.add_cascade(label="30 minute",menu=fnmenu)
#plotmenu.add_command(label="FluxNet",command=self.do_plotfluxnet)
fpmenu = tk.Menu(menubar,tearoff=0)
fpmenu.add_command(label="Standard",command=lambda:self.do_plotfingerprint(mode="standard"))
fpmenu.add_command(label="Custom",command=lambda:self.do_plotfingerprint(mode="custom"))
plotmenu.add_cascade(label="Fingerprint",menu=fpmenu)
plotmenu.add_command(label="Quick check",command=self.do_plotquickcheck)
plotmenu.add_command(label="Years check",command=self.option_not_implemented)
plotmenu.add_separator()
plotmenu.add_command(label="Close plots",command=self.do_closeplotwindows)
menubar.add_cascade(label="Plot",menu=plotmenu)
# and the "Utilities" menu
utilsmenu = tk.Menu(menubar,tearoff=0)
climatologymenu = tk.Menu(menubar,tearoff=0)
climatologymenu.add_command(label="Standard",command=lambda:self.do_climatology(mode="standard"))
climatologymenu.add_command(label="Custom",command=lambda:self.do_climatology(mode="custom"))
utilsmenu.add_cascade(label="Climatology",menu=climatologymenu)
utilsmenu.add_command(label="Compare Ah",command=self.option_not_implemented)
utilsmenu.add_command(label="Compare EP",command=self.do_compare_eddypro)
ustarmenu = tk.Menu(menubar,tearoff=0)
ustarmenu.add_command(label="Reichstein",command=self.option_not_implemented)
ustarmenu.add_command(label="Change Point Detection",command=self.do_cpd)
utilsmenu.add_cascade(label="u* threshold",menu=ustarmenu)
menubar.add_cascade(label="Utilities",menu=utilsmenu)
# and the "Help" menu
helpmenu = tk.Menu(menubar,tearoff=0)
helpmenu.add_command(label="Contents",command=self.do_helpcontents)
helpmenu.add_command(label="About",command=self.option_not_implemented)
menubar.add_cascade(label="Help",menu=helpmenu)
self.config(menu=menubar)
def do_climatology(self,mode="standard"):
"""
Calls qcclim.climatology
"""
logging.info(' Starting climatology')
self.do_progress(text='Doing climatology ...')
if mode=="standard":
stdname = "controlfiles/standard/climatology.txt"
if os.path.exists(stdname):
cf = qcio.get_controlfilecontents(stdname)
self.do_progress(text='Opening input file ...')
filename = qcio.get_filename_dialog(path='../Sites',title='Choose a netCDF file')
if len(filename)==0:
logging.info( " Climatology: no input file chosen")
self.do_progress(text='Waiting for input ...')
return
if "Files" not in dir(cf): cf["Files"] = {}
cf["Files"]["file_path"] = ntpath.split(filename)[0]+"/"
cf["Files"]["in_filename"] = ntpath.split(filename)[1]
else:
self.do_progress(text='Loading control file ...')
cf = qcio.load_controlfile(path='controlfiles')
if len(cf)==0: self.do_progress(text='Waiting for input ...'); return
else:
self.do_progress(text='Loading control file ...')
cf = qcio.load_controlfile(path='controlfiles')
if len(cf)==0: self.do_progress(text='Waiting for input ...'); return
self.do_progress(text='Doing the climatology')
qcclim.climatology(cf)
self.do_progress(text='Finished climatology')
logging.info(' Finished climatology')
logging.info("")
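do_climatology, and several methods below it (do_cpd, do_plotfingerprint, do_plotfluxnet), resolve their control file the same way: use the standard control file if it exists and patch in the user's chosen input file, otherwise fall back to loading a custom control file. Stripped of the GUI and qcio calls, and with hypothetical callables standing in for the file dialogs, the branch logic is roughly:

```python
import os

def resolve_control_file(mode, stdname, load_custom, choose_input):
    """Return control-file contents following the standard/custom pattern.

    stdname      -- path to the standard control file
    load_custom  -- callable returning a user-chosen control file (dict)
    choose_input -- callable returning the input data file path ('' = cancel)
    """
    if mode == "standard" and os.path.exists(stdname):
        cf = {"contents": stdname}   # stand-in for qcio.get_controlfilecontents
        filename = choose_input()
        if not filename:
            return None              # user cancelled: go back to waiting
        head, tail = os.path.split(filename)
        cf.setdefault("Files", {})
        cf["Files"]["file_path"] = head + "/"
        cf["Files"]["in_filename"] = tail
        return cf
    cf = load_custom()               # stand-in for qcio.load_controlfile
    return cf if cf else None
```

This is a sketch of the control flow only; the real code uses `ntpath.split` and returns via `do_progress` messages rather than `None`.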
def do_closeplotwindows(self):
"""
Close plot windows
"""
self.do_progress(text='Closing plot windows ...') # tell the user what we're doing
logging.info(' Closing plot windows ...')
matplotlib.pyplot.close('all')
#fig_numbers = [n.num for n in matplotlib._pylab_helpers.Gcf.get_all_fig_managers()]
##logging.info(' Closing plot windows: '+str(fig_numbers))
#for n in fig_numbers:
#matplotlib.pyplot.close(n)
self.do_progress(text='Waiting for input ...') # tell the user what we're doing
logging.info(' Waiting for input ...')
def do_compare_eddypro(self):
"""
Calls qcclim.compare_ep
Compares the results of OzFluxQC (L3) with those from EddyPro (full output).
"""
self.do_progress(text='Comparing EddyPro and OzFlux results ...')
qcclim.compare_eddypro()
self.do_progress(text='Finished comparing EddyPro and OzFlux')
logging.info(' Finished comparing EddyPro and OzFlux')
def do_cpd(self):
"""
Calls qccpd.cpd_main
Estimates the u* threshold using change point detection (CPD).
"""
logging.info(' Starting estimation u* threshold using CPD')
self.do_progress(text='Estimating u* threshold using CPD ...')
stdname = "controlfiles/standard/cpd.txt"
if os.path.exists(stdname):
cf = qcio.get_controlfilecontents(stdname)
filename = qcio.get_filename_dialog(path='../Sites',title='Choose an input nc file')
if len(filename)==0: self.do_progress(text='Waiting for input ...'); return
if "Files" not in dir(cf): cf["Files"] = {}
cf["Files"]["file_path"] = ntpath.split(filename)[0]+"/"
cf["Files"]["in_filename"] = ntpath.split(filename)[1]
else:
cf = qcio.load_controlfile(path='controlfiles')
if len(cf)==0: self.do_progress(text='Waiting for input ...'); return
if "Options" not in cf: cf["Options"]={}
cf["Options"]["call_mode"] = "interactive"
qccpd.cpd_main(cf)
self.do_progress(text='Finished estimating u* threshold')
logging.info(' Finished estimating u* threshold')
logging.info("")
def do_helpcontents(self):
tkMessageBox.showinfo("Obi Wan says ...","Read the source, Luke!")
def do_l1qc(self):
"""
Calls qcls.l1qc
"""
logging.info(" Starting L1 processing ...")
self.do_progress(text='Load L1 Control File ...')
cf = qcio.load_controlfile(path='controlfiles')
if len(cf)==0:
logging.info( " L1: no control file chosen")
self.do_progress(text='Waiting for input ...')
return
self.do_progress(text='Doing L1 ...')
ds1 = qcls.l1qc(cf)
if ds1.returncodes["value"] == 0:
outfilename = qcio.get_outfilenamefromcf(cf)
ncFile = qcio.nc_open_write(outfilename)
qcio.nc_write_series(ncFile,ds1)
self.do_progress(text='Finished L1')
logging.info(' Finished L1')
logging.info("")
else:
msg = 'An error occurred, check the console ...'
self.do_progress(text=msg)
def do_l2qc(self):
"""
Call qcls.l2qc function
Performs L2 QA/QC processing on raw data
Outputs L2 netCDF file to ncData folder
ControlFiles:
L2_year.txt
or
L2.txt
ControlFile contents (see ControlFile/Templates/L2.txt for example):
[General]:
Enter list of functions to be performed
[Files]:
L1 input file name and path
L2 output file name and path
[Variables]:
Variable names and parameters for:
Range check to set upper and lower rejection limits
Diurnal check to reject observations by time of day that
are outside specified standard deviation limits
Timestamps for excluded dates
Timestamps for excluded hours
[Plots]:
Variable lists for plot generation
"""
logging.info(" Starting L2 processing ...")
self.do_progress(text='Load L2 Control File ...')
self.cf = qcio.load_controlfile(path='controlfiles')
if len(self.cf)==0:
logging.info( " L2: no control file chosen")
self.do_progress(text='Waiting for input ...')
return
infilename = qcio.get_infilenamefromcf(self.cf)
if not qcutils.file_exists(infilename): self.do_progress(text='An error occurred, check the console ...'); return
self.do_progress(text='Doing L2 QC ...')
self.ds1 = qcio.nc_read_series(infilename)
if len(self.ds1.series.keys())==0: self.do_progress(text='An error occurred, check the console ...'); del self.ds1; return
self.update_startenddate(str(self.ds1.series['DateTime']['Data'][0]),
str(self.ds1.series['DateTime']['Data'][-1]))
self.ds2 = qcls.l2qc(self.cf,self.ds1)
logging.info(' Finished L2 QC process')
self.do_progress(text='Finished L2 QC process')
self.do_progress(text='Saving L2 QC ...') # put up the progress message
outfilename = qcio.get_outfilenamefromcf(self.cf)
if len(outfilename)==0: self.do_progress(text='An error occurred, check the console ...'); return
ncFile = qcio.nc_open_write(outfilename)
qcio.nc_write_series(ncFile,self.ds2) # save the L2 data
self.do_progress(text='Finished saving L2 QC data') # tell the user we are done
logging.info(' Finished saving L2 QC data')
logging.info("")
def do_l3qc(self):
"""
Call qcls.l3qc function
Performs L3 Corrections and QA/QC processing on L2 data
Outputs L3 netCDF file to ncData folder
Outputs L3 netCDF file to OzFlux folder
Available corrections:
* corrections requiring ancillary measurements or samples
marked with an asterisk
Linear correction
fixed slope
linearly shifting slope
Conversion of virtual temperature to actual temperature
2D Coordinate rotation
Massman correction for frequency attenuation*
Webb, Pearman and Leuning correction for flux effects on density
measurements
Conversion of virtual heat flux to actual heat flux
Correction of soil moisture content to empirical calibration
curve*
Addition of soil heat storage to ground heat flux*
ControlFiles:
L3_year.txt
or
L3a.txt
ControlFile contents (see ControlFile/Templates/L3.txt for example):
[General]:
Python control parameters
[Files]:
L2 input file name and path
L3 output file name and ncData folder path
L3 OzFlux output file name and OzFlux folder path
[Massman] (where available):
Constants used in frequency attenuation correction
zmd: instrument height (z) less zero-plane displacement
height (d), m
z0: aerodynamic roughness length, m
angle: angle from CSAT mounting point between CSAT and
IRGA mid-path, degrees
CSATarm: distance from CSAT mounting point to CSAT
mid-path, m
IRGAarm: distance from CSAT mounting point to IRGA
mid-path, m
[Soil]:
Constants used in correcting Fg for storage and in empirical
corrections of soil water content
FgDepth: Heat flux plate depth, m
BulkDensity: Soil bulk density, kg/m3
OrganicContent: Soil organic content, fraction
SwsDefault
Constants for empirical corrections using log(sensor)
and exp(sensor) functions (SWC_a0, SWC_a1, SWC_b0,
SWC_b1, SWC_t, TDR_a0, TDR_a1, TDR_b0, TDR_b1,
TDR_t)
Variable and attributes lists (empSWCin, empSWCout,
empTDRin, empTDRout, linTDRin, SWCattr, TDRattr)
[Output]:
Variable subset list for OzFlux output file
[Variables]:
Variable names and parameters for:
Range check to set upper and lower rejection limits
Diurnal check to reject observations by time of day that
are outside specified standard deviation limits
Timestamps, slope, and offset for Linear correction
[Plots]:
Variable lists for plot generation
"""
logging.info(" Starting L3 processing ...")
self.cf = qcio.load_controlfile(path='controlfiles')
if len(self.cf)==0:
logging.info( " L3: no control file chosen")
self.do_progress(text='Waiting for input ...')
return
infilename = qcio.get_infilenamefromcf(self.cf)
if not qcutils.file_exists(infilename): self.do_progress(text='An error occurred, check the console ...'); return
self.ds2 = qcio.nc_read_series(infilename)
if len(self.ds2.series.keys())==0: self.do_progress(text='An error occurred, check the console ...'); del self.ds2; return
self.update_startenddate(str(self.ds2.series['DateTime']['Data'][0]),
str(self.ds2.series['DateTime']['Data'][-1]))
self.do_progress(text='Doing L3 QC & Corrections ...')
self.ds3 = qcls.l3qc(self.cf,self.ds2)
self.do_progress(text='Finished L3')
txtstr = ' Finished L3: Standard processing for site: '
txtstr = txtstr+self.ds3.globalattributes['site_name'].replace(' ','')
logging.info(txtstr)
self.do_progress(text='Saving L3 QC & Corrected NetCDF data ...') # put up the progress message
outfilename = qcio.get_outfilenamefromcf(self.cf)
if len(outfilename)==0: self.do_progress(text='An error occurred, check the console ...'); return
ncFile = qcio.nc_open_write(outfilename)
outputlist = qcio.get_outputlistfromcf(self.cf,'nc')
qcio.nc_write_series(ncFile,self.ds3,outputlist=outputlist) # save the L3 data
self.do_progress(text='Finished saving L3 QC & Corrected NetCDF data') # tell the user we are done
logging.info(' Finished saving L3 QC & Corrected NetCDF data')
logging.info("")
def do_l4qc(self):
"""
Call qcls.l4qc function
Performs L4 gap filling on L3 met data
or
Ingests L4 gap filled fluxes performed in external SOLO-ANN and
computes daily sums
Outputs L4 netCDF file to ncData folder
Outputs L4 netCDF file to OzFlux folder
ControlFiles:
L4_year.txt
or
L4b.txt
ControlFile contents (see ControlFile/Templates/L4.txt and
ControlFile/Templates/L4b.txt for examples):
[General]:
Python control parameters (SOLO)
Site characteristics parameters (Gap filling)
[Files]:
L3 input file name and path (Gap filling)
L4 input file name and path (SOLO)
L4 output file name and ncData folder path (both)
L4 OzFlux output file name and OzFlux folder path
[Variables]:
Variable subset list for OzFlux output file (where
available)
"""
logging.info(" Starting L4 processing ...")
cf = qcio.load_controlfile(path='controlfiles')
if len(cf)==0: self.do_progress(text='Waiting for input ...'); return
infilename = qcio.get_infilenamefromcf(cf)
if len(infilename)==0: self.do_progress(text='An error occurred, check the console ...'); return
if not qcutils.file_exists(infilename): self.do_progress(text='An error occurred, check the console ...'); return
ds3 = qcio.nc_read_series(infilename)
if len(ds3.series.keys())==0: self.do_progress(text='An error occurred, check the console ...'); del ds3; return
ds3.globalattributes['controlfile_name'] = cf['controlfile_name']
self.update_startenddate(str(ds3.series['DateTime']['Data'][0]),
str(ds3.series['DateTime']['Data'][-1]))
sitename = ds3.globalattributes['site_name']
self.do_progress(text='Doing L4 gap filling drivers: '+sitename+' ...')
if "Options" not in cf: cf["Options"]={}
cf["Options"]["call_mode"] = "interactive"
ds4 = qcls.l4qc(cf,ds3)
if ds4.returncodes["alternate"]=="quit" or ds4.returncodes["solo"]=="quit":
self.do_progress(text='Quitting L4: '+sitename)
logging.info(' Quitting L4: '+sitename)
else:
self.do_progress(text='Finished L4: '+sitename)
logging.info(' Finished L4: '+sitename)
self.do_progress(text='Saving L4 gap filled data ...') # put up the progress message
outfilename = qcio.get_outfilenamefromcf(cf)
if len(outfilename)==0: self.do_progress(text='An error occurred, check the console ...'); return
ncFile = qcio.nc_open_write(outfilename)
outputlist = qcio.get_outputlistfromcf(cf,'nc')
qcio.nc_write_series(ncFile,ds4,outputlist=outputlist) # save the L4 data
self.do_progress(text='Finished saving L4 gap filled data') # tell the user we are done
logging.info(' Finished saving L4 gap filled data')
logging.info("")
def do_l5qc(self):
"""
Call qcls.l5qc function to gap fill the fluxes.
"""
logging.info(" Starting L5 processing ...")
cf = qcio.load_controlfile(path='controlfiles')
if len(cf)==0: self.do_progress(text='Waiting for input ...'); return
infilename = qcio.get_infilenamefromcf(cf)
if len(infilename)==0: self.do_progress(text='An error occurred, check the console ...'); return
if not qcutils.file_exists(infilename): self.do_progress(text='An error occurred, check the console ...'); return
ds4 = qcio.nc_read_series(infilename)
if len(ds4.series.keys())==0: self.do_progress(text='An error occurred, check the console ...'); del ds4; return
ds4.globalattributes['controlfile_name'] = cf['controlfile_name']
self.update_startenddate(str(ds4.series['DateTime']['Data'][0]),
str(ds4.series['DateTime']['Data'][-1]))
sitename = ds4.globalattributes['site_name']
self.do_progress(text='Doing L5 gap filling fluxes: '+sitename+' ...')
if "Options" not in cf: cf["Options"]={}
cf["Options"]["call_mode"] = "interactive"
ds5 = qcls.l5qc(cf,ds4)
if ds5.returncodes["solo"]=="quit":
self.do_progress(text='Quitting L5: '+sitename)
logging.info(' Quitting L5: '+sitename)
else:
self.do_progress(text='Finished L5: '+sitename)
logging.info(' Finished L5: '+sitename)
self.do_progress(text='Saving L5 gap filled data ...') # put up the progress message
outfilename = qcio.get_outfilenamefromcf(cf)
if len(outfilename)==0: self.do_progress(text='An error occurred, check the console ...'); return
ncFile = qcio.nc_open_write(outfilename)
outputlist = qcio.get_outputlistfromcf(cf,'nc')
qcio.nc_write_series(ncFile,ds5,outputlist=outputlist) # save the L5 data
self.do_progress(text='Finished saving L5 gap filled data') # tell the user we are done
logging.info(' Finished saving L5 gap filled data')
logging.info("")
def do_l6qc(self):
"""
Call qcls.l6qc function to partition NEE into GPP and ER.
"""
logging.info(" Starting L6 processing ...")
cf = qcio.load_controlfile(path='controlfiles')
if len(cf)==0: self.do_progress(text='Waiting for input ...'); return
infilename = qcio.get_infilenamefromcf(cf)
if len(infilename)==0: self.do_progress(text='An error occurred, check the console ...'); return
if not qcutils.file_exists(infilename): self.do_progress(text='An error occurred, check the console ...'); return
ds5 = qcio.nc_read_series(infilename)
if len(ds5.series.keys())==0: self.do_progress(text='An error occurred, check the console ...'); del ds5; return
ds5.globalattributes['controlfile_name'] = cf['controlfile_name']
self.update_startenddate(str(ds5.series['DateTime']['Data'][0]),
str(ds5.series['DateTime']['Data'][-1]))
sitename = ds5.globalattributes['site_name']
self.do_progress(text='Doing L6 partitioning: '+sitename+' ...')
if "Options" not in cf: cf["Options"]={}
cf["Options"]["call_mode"] = "interactive"
ds6 = qcls.l6qc(cf,ds5)
self.do_progress(text='Finished L6: '+sitename)
logging.info(' Finished L6: '+sitename)
self.do_progress(text='Saving L6 partitioned data ...') # put up the progress message
outfilename = qcio.get_outfilenamefromcf(cf)
if len(outfilename)==0: self.do_progress(text='An error occurred, check the console ...'); return
ncFile = qcio.nc_open_write(outfilename)
outputlist = qcio.get_outputlistfromcf(cf,'nc')
qcio.nc_write_series(ncFile,ds6,outputlist=outputlist) # save the L6 data
self.do_progress(text='Finished saving L6 partitioned data') # tell the user we are done
logging.info(' Finished saving L6 partitioned data')
logging.info("")
def do_nc2ep_biomet(self):
""" Calls qcio.ep_biomet_write_csv. """
logging.info(' Starting conversion to EddyPro biomet file')
self.do_progress(text='Load control file ...')
self.cf = qcio.load_controlfile(path='controlfiles')
if len(self.cf)==0: self.do_progress(text='Waiting for input ...'); return
self.do_progress(text='Converting nc to EddyPro biomet CSV ...')
return_code = qcio.ep_biomet_write_csv(self.cf)
if return_code==0:
self.do_progress(text='An error occurred, check the console ...');
return
else:
logging.info(' Finished conversion to EddyPro biomet format')
self.do_progress(text='Finished conversion to EddyPro biomet format')
logging.info("")
def do_nc2fn(self):
""" Calls qcio.fn_write_csv. """
logging.info(' Starting conversion to FluxNet CSV file')
self.do_progress(text='Load control file ...')
self.cf = qcio.load_controlfile(path='controlfiles')
if len(self.cf)==0: self.do_progress(text='Waiting for input ...'); return
self.do_progress(text='Converting nc to FluxNet CSV ...')
qcio.fn_write_csv(self.cf)
logging.info(' Finished conversion')
self.do_progress(text='Finished conversion')
logging.info("")
def do_nc2reddyproc(self):
""" Calls qcio.reddyproc_write_csv."""
logging.info(' Starting conversion to REddyProc CSV file')
self.do_progress(text="Choosing netCDF file ...")
ncfilename = qcio.get_filename_dialog(path="../Sites",title="Choose a netCDF file")
if len(ncfilename)==0 or not os.path.exists(ncfilename):
self.do_progress(text="Waiting for input ..."); return
self.do_progress(text='Converting nc to REddyProc CSV ...')
qcio.reddyproc_write_csv(ncfilename)
logging.info(' Finished conversion')
self.do_progress(text='Finished conversion')
logging.info("")
def do_nc2smap(self):
""" Calls qcio.smap_write_csv. """
logging.info(' Starting conversion to SMAP CSV file')
self.do_progress(text='Load control file ...')
self.cf = qcio.load_controlfile(path='controlfiles')
if len(self.cf)==0: self.do_progress(text='Waiting for input ...'); return
self.do_progress(text='Converting nc to SMAP CSV ...')
qcio.smap_write_csv(self.cf)
logging.info(' Finished conversion')
self.do_progress(text='Finished conversion')
logging.info("")
def do_nc2xls(self):
""" Calls qcio.nc_2xls. """
logging.info(" Starting conversion to Excel file")
self.do_progress(text="Choosing netCDF file ...")
ncfilename = qcio.get_filename_dialog(path="../Sites",title="Choose a netCDF file")
if len(ncfilename)==0: self.do_progress(text="Waiting for input ..."); return
self.do_progress(text="Converting netCDF file to Excel file")
qcio.nc_2xls(ncfilename,outputlist=None)
self.do_progress(text="Finished converting netCDF file")
logging.info(" Finished converting netCDF file")
logging.info("")
def do_ncconcat(self):
"""
Calls qcio.nc_concatenate
"""
logging.info(' Starting concatenation of netCDF files')
self.do_progress(text='Loading control file ...')
cf = qcio.load_controlfile(path='controlfiles')
if len(cf)==0: self.do_progress(text='Waiting for input ...'); return
self.do_progress(text='Concatenating files')
qcio.nc_concatenate(cf)
self.do_progress(text='Finished concatenating files')
logging.info(' Finished concatenating files')
logging.info("")
def do_ncsplit(self):
"""
Calls qcio.nc_split
"""
logging.info(' Starting split of netCDF file')
self.do_progress(text='Splitting file')
qcio.nc_split()
self.do_progress(text='Finished splitting file')
logging.info(' Finished splitting file')
logging.info("")
def do_plotfingerprint(self,mode="standard"):
""" Plot fingerprint"""
logging.info(' Starting fingerprint plot')
self.do_progress(text='Doing fingerprint plot ...')
if mode=="standard":
stdname = "controlfiles/standard/fingerprint.txt"
if os.path.exists(stdname):
cf = qcio.get_controlfilecontents(stdname)
filename = qcio.get_filename_dialog(path='../Sites',title='Choose a netCDF file')
if len(filename)==0 or not os.path.exists(filename):
self.do_progress(text='Waiting for input ...'); return
if "Files" not in dir(cf): cf["Files"] = {}
cf["Files"]["file_path"] = ntpath.split(filename)[0]+"/"
cf["Files"]["in_filename"] = ntpath.split(filename)[1]
else:
self.do_progress(text='Loading control file ...')
cf = qcio.load_controlfile(path='controlfiles')
if len(cf)==0: self.do_progress(text='Waiting for input ...'); return
else:
self.do_progress(text='Loading control file ...')
cf = qcio.load_controlfile(path='controlfiles')
if len(cf)==0: self.do_progress(text='Waiting for input ...'); return
if "Options" not in cf: cf["Options"]={}
cf["Options"]["call_mode"] = "interactive"
self.do_progress(text='Plotting fingerprint ...')
qcplot.plot_fingerprint(cf)
self.do_progress(text='Finished plotting fingerprint')
logging.info(' Finished plotting fingerprint')
logging.info("")
def do_plotfluxnet(self,mode="standard"):
""" Plot FluxNet style time series of data."""
self.do_progress(text='Doing FluxNet plots ...')
if mode=="standard":
stdname = "controlfiles/standard/fluxnet.txt"
if os.path.exists(stdname):
cf = qcio.get_controlfilecontents(stdname)
filename = qcio.get_filename_dialog(path='../Sites',title='Choose a netCDF file')
if len(filename)==0 or not os.path.exists(filename):
self.do_progress(text='Waiting for input ...'); return
if "Files" not in dir(cf): cf["Files"] = {}
cf["Files"]["file_path"] = ntpath.split(filename)[0]+"/"
cf["Files"]["in_filename"] = ntpath.split(filename)[1]
else:
self.do_progress(text='Loading control file ...')
cf = qcio.load_controlfile(path='controlfiles')
if len(cf)==0: self.do_progress(text='Waiting for input ...'); return
else:
self.do_progress(text='Loading control file ...')
cf = qcio.load_controlfile(path='controlfiles')
if len(cf)==0: self.do_progress(text='Waiting for input ...'); return
self.do_progress(text='Plotting FluxNet style plots ...')
qcplot.plot_fluxnet(cf)
self.do_progress(text='Finished FluxNet plotting')
logging.info(' Finished FluxNet plotting')
def do_plotquickcheck(self):
""" Plot quickcheck"""
self.do_progress(text='Loading control file ...')
stdname = "controlfiles/standard/quickcheck.txt"
if os.path.exists(stdname):
cf = qcio.get_controlfilecontents(stdname)
filename = qcio.get_filename_dialog(path='../Sites',title='Choose an input file')
if len(filename)==0: self.do_progress(text='Waiting for input ...'); return
if "Files" not in dir(cf): cf["Files"] = {}
cf["Files"]["file_path"] = ntpath.split(filename)[0]+"/"
cf["Files"]["in_filename"] = ntpath.split(filename)[1]
else:
cf = qcio.load_controlfile(path='controlfiles')
if len(cf)==0: self.do_progress(text='Waiting for input ...'); return
self.do_progress(text='Plotting quickcheck ...')
qcplot.plot_quickcheck(cf)
self.do_progress(text='Finished plotting quickcheck')
logging.info(' Finished plotting quickcheck')
def do_plotL1L2(self):
"""
Plot L1 (raw) and L2 (QA/QC) data in blue and red, respectively
Control File for do_l2qc function used.
If L2 Control File not loaded, requires control file selection.
"""
if 'ds1' not in dir(self) or 'ds2' not in dir(self):
self.cf = qcio.load_controlfile(path='controlfiles')
if len(self.cf)==0: self.do_progress(text='Waiting for input ...'); return
l1filename = qcio.get_infilenamefromcf(self.cf)
if not qcutils.file_exists(l1filename): self.do_progress(text='An error occurred, check the console ...'); return
self.ds1 = qcio.nc_read_series(l1filename)
if len(self.ds1.series.keys())==0: self.do_progress(text='An error occurred, check the console ...'); del self.ds1; return
l2filename = qcio.get_outfilenamefromcf(self.cf)
self.ds2 = qcio.nc_read_series(l2filename)
if len(self.ds2.series.keys())==0: self.do_progress(text='An error occurred, check the console ...'); del self.ds2; return
self.update_startenddate(str(self.ds1.series['DateTime']['Data'][0]),
str(self.ds1.series['DateTime']['Data'][-1]))
self.do_progress(text='Plotting L1 & L2 QC ...')
cfname = self.ds2.globalattributes['controlfile_name']
self.cf = qcio.get_controlfilecontents(cfname)
for nFig in self.cf['Plots'].keys():
si = qcutils.GetDateIndex(self.ds1.series['DateTime']['Data'],self.plotstartEntry.get(),
ts=self.ds1.globalattributes['time_step'],default=0,match='exact')
ei = qcutils.GetDateIndex(self.ds1.series['DateTime']['Data'],self.plotendEntry.get(),
ts=self.ds1.globalattributes['time_step'],default=-1,match='exact')
plt_cf = self.cf['Plots'][str(nFig)]
if 'Type' in plt_cf.keys():
if str(plt_cf['Type']).lower() =='xy':
self.do_progress(text='Plotting L1 and L2 XY ...')
qcplot.plotxy(self.cf,nFig,plt_cf,self.ds1,self.ds2,si,ei)
else:
self.do_progress(text='Plotting L1 and L2 QC ...')
qcplot.plottimeseries(self.cf,nFig,self.ds1,self.ds2,si,ei)
else:
self.do_progress(text='Plotting L1 and L2 QC ...')
qcplot.plottimeseries(self.cf,nFig,self.ds1,self.ds2,si,ei)
self.do_progress(text='Finished plotting L1 and L2')
logging.info(' Finished plotting L1 and L2, check the GUI')
def do_plotL3L3(self):
"""
Plot L3 (QA/QC and Corrected) data
Control File for do_l3qc function used.
If L3 Control File not loaded, requires control file selection.
"""
if 'ds3' not in dir(self):
self.cf = qcio.load_controlfile(path='controlfiles')
if len(self.cf)==0: self.do_progress(text='Waiting for input ...'); return
l3filename = qcio.get_outfilenamefromcf(self.cf)
self.ds3 = qcio.nc_read_series(l3filename)
if len(self.ds3.series.keys())==0: self.do_progress(text='An error occurred, check the console ...'); del self.ds3; return
self.update_startenddate(str(self.ds3.series['DateTime']['Data'][0]),
str(self.ds3.series['DateTime']['Data'][-1]))
self.do_progress(text='Plotting L3 QC ...')
cfname = self.ds3.globalattributes['controlfile_name']
self.cf = qcio.get_controlfilecontents(cfname)
for nFig in self.cf['Plots'].keys():
si = qcutils.GetDateIndex(self.ds3.series['DateTime']['Data'],self.plotstartEntry.get(),
ts=self.ds3.globalattributes['time_step'],default=0,match='exact')
ei = qcutils.GetDateIndex(self.ds3.series['DateTime']['Data'],self.plotendEntry.get(),
ts=self.ds3.globalattributes['time_step'],default=-1,match='exact')
plt_cf = self.cf['Plots'][str(nFig)]
if 'Type' in plt_cf.keys():
if str(plt_cf['Type']).lower() =='xy':
self.do_progress(text='Plotting L3 XY ...')
qcplot.plotxy(self.cf,nFig,plt_cf,self.ds3,self.ds3,si,ei)
else:
self.do_progress(text='Plotting L3 QC ...')
qcplot.plottimeseries(self.cf,nFig,self.ds3,self.ds3,si,ei)
else:
self.do_progress(text='Plotting L3 QC ...')
qcplot.plottimeseries(self.cf,nFig,self.ds3,self.ds3,si,ei)
self.do_progress(text='Finished plotting L3')
logging.info(' Finished plotting L3, check the GUI')
def do_plotL3L4(self):
"""
Plot L3 (QA/QC and Corrected) and L4 (Gap Filled) data in blue and
red, respectively
Control File for do_l4qc function used.
If L4 Control File not loaded, requires control file selection.
"""
if 'ds3' not in dir(self) or 'ds4' not in dir(self):
self.cf = qcio.load_controlfile(path='controlfiles')
if len(self.cf)==0:
self.do_progress(text='Waiting for input ...')
return
l3filename = qcio.get_infilenamefromcf(self.cf)
if not qcutils.file_exists(l3filename): self.do_progress(text='An error occurred, check the console ...'); return
self.ds3 = qcio.nc_read_series(l3filename)
if len(self.ds3.series.keys())==0: self.do_progress(text='An error occurred, check the console ...'); del self.ds3; return
l4filename = qcio.get_outfilenamefromcf(self.cf)
self.ds4 = qcio.nc_read_series(l4filename)
if len(self.ds4.series.keys())==0: self.do_progress(text='An error occurred, check the console ...'); del self.ds4; return
self.update_startenddate(str(self.ds3.series['DateTime']['Data'][0]),
str(self.ds3.series['DateTime']['Data'][-1]))
self.do_progress(text='Plotting L3 and L4 QC ...')
cfname = self.ds4.globalattributes['controlfile_name']
self.cf = qcio.get_controlfilecontents(cfname)
for nFig in self.cf['Plots'].keys():
si = qcutils.GetDateIndex(self.ds3.series['DateTime']['Data'],self.plotstartEntry.get(),
ts=self.ds3.globalattributes['time_step'],default=0,match='exact')
ei = qcutils.GetDateIndex(self.ds3.series['DateTime']['Data'],self.plotendEntry.get(),
ts=self.ds3.globalattributes['time_step'],default=-1,match='exact')
qcplot.plottimeseries(self.cf,nFig,self.ds3,self.ds4,si,ei)
self.do_progress(text='Finished plotting L4')
logging.info(' Finished plotting L4, check the GUI')
def do_plotL4L5(self):
"""
Plot L4 (Gap filled) and L5 (Partitioned) data.
"""
pass
def do_plotL6_summary(self):
"""
Plot L6 summary.
"""
cf = qcio.load_controlfile(path='controlfiles')
if len(cf)==0:
self.do_progress(text='Waiting for input ...')
return
if "Options" not in cf: cf["Options"]={}
cf["Options"]["call_mode"] = "interactive"
l6filename = qcio.get_outfilenamefromcf(cf)
if not qcutils.file_exists(l6filename): self.do_progress(text='An error occurred, check the console ...'); return
ds6 = qcio.nc_read_series(l6filename)
if len(ds6.series.keys())==0: self.do_progress(text='An error occurred, check the console ...'); del ds6; return
self.update_startenddate(str(ds6.series['DateTime']['Data'][0]),
str(ds6.series['DateTime']['Data'][-1]))
self.do_progress(text='Plotting L6 summary ...')
qcgf.ImportSeries(cf,ds6)
qcrp.L6_summary(cf,ds6)
self.do_progress(text='Finished plotting L6 summary')
logging.info(' Finished plotting L6 summary, check the GUI')
def do_progress(self,text):
"""
Update progress message in QC Data GUI
"""
self.progress.destroy()
self.progress = tk.Label(self.org_frame, text=text)
self.progress.grid(row=8,column=0,columnspan=6,sticky="W")
self.update()
def do_quit(self):
"""
Close plot windows and quit QC Data GUI
"""
self.do_progress(text='Closing plot windows ...') # tell the user what we're doing
logging.info(' Closing plot windows ...')
matplotlib.pyplot.close('all')
self.do_progress(text='Quitting ...') # tell the user what we're doing
logging.info(' Quitting ...')
self.quit()
def do_savexL2(self):
"""
        Calls qcio.nc_2xls
        Exports Excel data from a netCDF file
        Outputs an L2 Excel file containing Data and Flag worksheets
"""
        self.do_progress(text='Exporting L2 netCDF -> Excel ...') # put up the progress message
# get the output filename
outfilename = qcio.get_outfilenamefromcf(self.cf)
# get the output list
outputlist = qcio.get_outputlistfromcf(self.cf,'xl')
qcio.nc_2xls(outfilename,outputlist=outputlist)
self.do_progress(text='Finished L2 Data Export') # tell the user we are done
logging.info(' Finished saving L2 data')
def do_savexL3(self):
"""
        Calls qcio.nc_2xls
        Exports Excel data from a netCDF file
        Outputs an L3 Excel file containing Data and Flag worksheets
"""
        self.do_progress(text='Exporting L3 netCDF -> Excel ...') # put up the progress message
# get the output filename
outfilename = qcio.get_outfilenamefromcf(self.cf)
# get the output list
outputlist = qcio.get_outputlistfromcf(self.cf,'xl')
qcio.nc_2xls(outfilename,outputlist=outputlist)
self.do_progress(text='Finished L3 Data Export') # tell the user we are done
logging.info(' Finished saving L3 data')
def do_xl2nc(self):
"""
Calls qcio.xl2nc
"""
logging.info(" Starting L1 processing ...")
self.do_progress(text='Loading control file ...')
self.cf = qcio.load_controlfile(path='controlfiles')
if len(self.cf)==0: self.do_progress(text='Waiting for input ...'); return
self.do_progress(text='Reading Excel file & writing to netCDF')
rcode = qcio.xl2nc(self.cf,"L1")
if rcode==1:
self.do_progress(text='Finished writing to netCDF ...')
logging.info(' Finished writing to netCDF ...')
else:
self.do_progress(text='An error occurred, check the console ...')
def update_startenddate(self,startstr,endstr):
"""
Read start and end timestamps from data and report in QC Data GUI
"""
self.filestartValue.destroy()
self.fileendValue.destroy()
self.filestartValue = tk.Label(self.org_frame,text=startstr)
self.filestartValue.grid(row=3,column=0,columnspan=3)
self.fileendValue = tk.Label(self.org_frame,text=endstr)
self.fileendValue.grid(row=3,column=3,columnspan=3)
self.update()
if __name__ == "__main__":
#log = qcutils.startlog('qc','logfiles/qc.log')
qcGUI = qcgui(None)
main_title = cfg.version_name+' Main GUI '+cfg.version_number
qcGUI.title(main_title)
qcGUI.mainloop()
qcGUI.destroy()
logging.info('QC: All done')
|
Some innate force propels me when someone tells me I can't do something. It sets off a chain reaction prompting me to fulfill this new desire. I remember when I first began camping with my boys. I was an older parent and my children were quite young. In the beginning I went with friends, relishing the closeness of camaraderie in case the inevitable would happen.
But soon, due to other expectations, my list of friends dwindled and I was left alone. Fearing I might not take the annual camping trip, I decided to set out alone with my two boys, the oldest not yet eight. Oh, how I was met with such displeasure from family and friends: "Are you insane? You can't go alone for 14 days on the road yourself!"
"You are not thinking clearly. Then you can't go to remote places any longer."
"Please tell me you are not thinking of doing this again this year with a car that barely runs."
But these helpful voices only made me more determined not to give up the open highway before me. I was destined to become a summer traveler, enjoying God's country as I puttered along with my camper in tow. Did I ever regret it when the inevitable occurred and I was forced to find service stations? Never! Not once would I turn back or forfeit the next adventure. I knew God always protected me against my "giants".
Today's verse takes place at the point during the Exodus of Israel when the tribes were camped in the desert preparing to take possession of what God had promised. Twelve men, including Joshua and Caleb, were sent by Moses to reconnoiter the Promised Land.
When the spies returned from quietly observing their new undertaking, all twelve reported that it was a good land and well worth possessing. Flowing with milk and honey, it promised that none of these travelers would ever be in want again. But ten of the twelve did not have the faith to be brave. They were not by nature as adventurous as Caleb and Joshua. They told the people that there were giants in the land, and that the Israelites could not defeat them.
This incident reminds me of what we often face in life with our faith. We think God might be asking us to take a risk. We carefully analyze the situation only to discover there are "giants" in this new endeavor and opt out silently.
You need to understand that God is bigger than these giants in your life. God wants you to succeed and needs you to listen to your faith and take risks, even if you are emotionally disconnected. What is more unnerving is looking back and not making the attempt, then spending your life wandering in the wilderness of "I wish I had" for eternity. God wants you to be like Caleb and say, "Yes, I can do this with your help, Lord."
|
import itertools
import re
"""
# parameter sweep
input: ['trim_sort', 'kiki', '?k=29-30']
output: [['trim_sort', 'kiki', '?k=29'], ['trim_sort', 'kiki', '?k=30']]
# parameter sweep with multiple assemblers
input: ['trim_sort', 'kiki ?k=29-30 velvet']
output: [['trim_sort', 'kiki', '?k=29'], ['trim_sort', 'kiki', '?k=30'], ['trim_sort', 'velvet']]
# binary parameter sweep
CLI: trim_sort kiki ?k=29-30 ?cov=20-21
input: ['trim_sort', 'kiki', '?k=29-30', '?cov=20-21']
output: [['trim_sort', 'kiki', '?k=29', '?cov=20'], ['trim_sort', 'kiki', '?k=30', '?cov=20'],
        ['trim_sort', 'kiki', '?k=29', '?cov=21'], ['trim_sort', 'kiki', '?k=30', '?cov=21']]
"""
#my_pipe = ['trim_sort', '?length=10-11', 'kiki ?k=29-30 ?cov=29-30']
my_pipe = ['ma', '?k=1,5,3', 'b']
#my_pipe = ['a' , 'b ?k=1,10-11,20,30:40:2']
test=['sga_preprocess', '?min_length=29,100,150','sga_ec', 'tagdust',
'velvet ?hash_length=31:39:2 idba']
def parse_pipe(pipe):
"""
Parses modules and parameters into stages
Input: a flat (no quotes) list of modules and params
e.g. ['kiki', '?k=29-30', 'velvet']
Output: list of lists containing single modules and
parameters
e.g. [['kiki', '?k=29'], ['kiki', '?k=30'], ['velvet']]
"""
module = []
stages = []
for string in pipe:
if not string.startswith('?'):
if module:
stages.append(module) #flush
module = [string]
else:
module.append(string)
else:
module.append(string) #param
if module:
stages.append(module)
stages = [expand_sweep(m) for m in stages]
return stages
def parse_branches(pipe):
stages = []
flat_pipe = []
for i in range(len(pipe)):
if len(pipe[i].split(' ')) == 1:
flat_pipe.append(pipe[i])
try:
if len(pipe[i+1].split(' ')) != 1:
stages += parse_pipe(flat_pipe)
flat_pipe = []
except:
stages += parse_pipe(flat_pipe)
flat_pipe = []
else: # parenth
stages += [list(itertools.chain(*parse_pipe(pipe[i].split(' '))))]
cart = [list(itertools.product(*stages))]
all_pipes = []
for pipe in cart[0]:
all_pipes.append(list(itertools.chain(*pipe)))
#### Remove occurences of 'none'
for i, p in enumerate(all_pipes):
all_pipes[i] = [mod for mod in p if mod != 'none']
return all_pipes
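The cartesian-product step in parse_branches, which combines per-stage alternatives into complete pipelines via itertools.product and flattens each combination with itertools.chain, can be seen in isolation (the stages data below is illustrative):

```python
import itertools

# Each stage is a list of alternative (module, param, ...) tuples; a full
# pipeline picks one alternative per stage and flattens the result.
stages = [[('trim_sort',)], [('kiki', '?k=29'), ('kiki', '?k=30')]]
pipes = [list(itertools.chain(*combo)) for combo in itertools.product(*stages)]
# pipes == [['trim_sort', 'kiki', '?k=29'], ['trim_sort', 'kiki', '?k=30']]
```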
def expand_sweep(module):
"""
    [m, ?p=1-2, ?p=3-4] -> [m, p1, p3, m, p2, p3, m, p1, p4, m, p2, p4]
"""
expanded = []
has_range = False
for word in module:
if word.startswith('?'):
f = re.split('\?|=', word)[1:]
flag = f[0]
params = f[1]
sweep = []
for param in params.split(','):
s = re.split('-|:', param)
if len(s) != 1: #is range
has_range = True
delim = s[0].find('=')+1
if delim == 1:
break
srange = (int(s[0][delim:]),int(s[1]))
step_size = 1
if len(s) == 3:
step_size = int(s[2])
sweep += ['?{}={}'.format(flag, x)
for x in range(
srange[0], srange[1]+1, step_size)]
else:
sweep.append('?{}={}'.format(flag, s[0]))
has_range = True
expanded.append(sweep)
else: #mod name
expanded.append([word])
if has_range:
cart = [list(itertools.product(*expanded))]
flat = list(itertools.chain(*cart))
return flat
else:
return [module]
#print parse_branches(my_pipe)
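As a standalone illustration of the range syntax handled above, the core expansion of a comma/range spec such as "29-30" or "31:39:2" can be sketched on its own (the helper name expand_range_spec is hypothetical, not part of this module):

```python
import re

def expand_range_spec(flag, spec):
    # Expand one comma-separated spec -- e.g. "1,5,3", "29-30" or
    # "31:39:2" -- into '?flag=value' strings, mirroring the rules
    # implemented by expand_sweep above.
    out = []
    for param in spec.split(','):
        parts = re.split('-|:', param)
        if len(parts) == 1:                          # single value
            out.append('?{}={}'.format(flag, parts[0]))
        else:                                        # range, optional step
            step = int(parts[2]) if len(parts) == 3 else 1
            out += ['?{}={}'.format(flag, x)
                    for x in range(int(parts[0]), int(parts[1]) + 1, step)]
    return out

# e.g. expand_range_spec('k', '29-30') -> ['?k=29', '?k=30']
```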
|
Cathy has worked in the legal field as a legal secretary for her entire career, covering many different areas of the law; her primary focus is now on assisting estate planning attorneys. As a legal assistant at Legacy Law Group, Cathy is the first point of contact with our clients, greeting them and answering their calls. She assists with various roles throughout the firm, including helping the paralegals and attorneys as needed. She thoroughly enjoys working with people and plays a key role in keeping our office functioning smoothly.
She lives in Appleton with her cat Olivia and has lived in Appleton most of her life. She enjoys going to her place up north during the summer months to enjoy the outdoors. She enjoys spending time with her family, who all live in the Fox Valley area.
|
# Assuming all functions begin with ')' followed by '{', just find the matching brace and
# add a line with 'g_pVCR->SyncToken("<random string here>");'
import dlexer
import sys
class BlankStruct:
pass
def MatchParensBack( list, iStart ):
parenCount = -1
for i in range( 0, iStart ):
if list[iStart-i].id == __TOKEN_OPENPAREN:
parenCount += 1
elif list[iStart-i].id == __TOKEN_CLOSEPAREN:
parenCount -= 1
if parenCount == 0:
return iStart - i
return -1
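The backward scan above can be exercised on a plain string; this standalone sketch mirrors the idea (the helper match_paren_back is illustrative only, since the real function walks the token list):

```python
def match_paren_back(text, i_close):
    # Given the index of a ')' in text, scan backwards for the index of
    # its matching '('; returns -1 if the parens are unbalanced.
    depth = 0
    for i in range(i_close, -1, -1):
        if text[i] == ')':
            depth += 1
        elif text[i] == '(':
            depth -= 1
            if depth == 0:
                return i
    return -1

# e.g. match_paren_back("f(a, g(b))", 9) -> 1  (the '(' right after 'f')
```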
if len( sys.argv ) >= 2:
# Setup the parser.
parser = dlexer.DLexer( 0 )
__TOKEN_NEWLINE = parser.AddToken( '\n' )
__TOKEN_WHITESPACE = parser.AddToken( '[ \\t\\f\\v]+' )
__TOKEN_OPENBRACE = parser.AddToken( '{' )
__TOKEN_CLOSEBRACE = parser.AddToken( '}' )
__TOKEN_OPENPAREN = parser.AddToken( '\(' )
__TOKEN_CLOSEPAREN = parser.AddToken( '\)' )
__TOKEN_COMMENT = parser.AddToken( r"\/\/.*" )
__TOKEN_CONST = parser.AddToken( "const" )
__TOKEN_IF = parser.AddToken( "if" )
__TOKEN_WHILE = parser.AddToken( "while" )
__TOKEN_FOR = parser.AddToken( "for" )
__TOKEN_SWITCH = parser.AddToken( "switch" )
validChars = r"\~\@\#\$\%\^\&\!\,\w\.-/\[\]\<\>\""
__TOKEN_IDENT = parser.AddToken( '[' + validChars + ']+' )
__TOKEN_OPERATOR = parser.AddToken( "\=|\+" )
__TOKEN_SCOPE_OPERATOR = parser.AddToken( "::" )
__TOKEN_IGNORE = parser.AddToken( r"\#|\;|\:|\||\?|\'|\\|\*|\-|\`" )
head = None
# First, read all the tokens into a list.
list = []
parser.BeginReadFile( sys.argv[1] )
while 1:
m = parser.GetToken()
if m:
list.append( m )
else:
break
# Make a list of all the non-whitespace ones.
nw = []
for token in list:
if token.id == __TOKEN_NEWLINE or token.id == __TOKEN_WHITESPACE:
token.iNonWhitespace = -2222
else:
token.iNonWhitespace = len( nw )
nw.append( token )
# Get ready to output sync tokens.
file = open( sys.argv[1], 'r' )
fileLines = file.readlines()
file.close()
curLine = 1
iCur = 0
file = open( sys.argv[1], 'w' )
# Now, search for the patterns we're interested in.
# Look for <ident>::<ident> '(' <idents...> ')' followed by a '{'. This would be a function.
for token in list:
file.write( token.val )
if token.id == __TOKEN_NEWLINE:
curLine += 1
if token.id == __TOKEN_OPENBRACE:
i = token.iNonWhitespace
if i >= 6:
if nw[i-1].id == __TOKEN_CLOSEPAREN:
pos = MatchParensBack( nw, i-2 )
if pos != -1:
if nw[pos-1].id == __TOKEN_IDENT:
#ADD PROLOGUE CODE HERE
#file.write( "\n\tg_pVCR->SyncToken( \"%d_%s\" ); // AUTO-GENERATED SYNC TOKEN\n" % (iCur, nw[pos-1].val) )
iCur += 1
# TEST CODE TO PRINT OUT FUNCTION NAMES
#if nw[pos-2].id == __TOKEN_SCOPE_OPERATOR:
# print "%d: %s::%s" % ( curLine, nw[pos-3].val, nw[pos-1].val )
#else:
# print "%d: %s" % ( curLine, nw[pos-1].val )
file.close()
else:
print "VCRMode_AddSyncTokens <filename>"
|
Countries can enhance endogenous innovation using multifaceted incentives for science and technology indicators. We explore country-level innovation using OECD data for research and development (R&D), patents, and exports. We deploy a dual methodology of descriptive visualization and panel regression analysis. Our results highlight industry variances in R&D spending. As a nation develops, governmental expenditure on R&D decreases, and businesses take on an increasing role to fill the gap, increasing local innovation. Our portfolio of local versus foreign resident ownership of patents highlights implications for taxation/innovation policies. Countries with high foreign ownership of patents have low tax revenues due to the lack of associated costs with, and mobility of income from, patents. We call on these countries to devise targeted policies encouraging local patent ownership. Policy makers should also recognize factors influencing high-technology exports for innovation. Lastly, we call on countries to reinstate horizontal and vertical policies, and design national innovation ecosystems that integrate disparate policies in an effort to drive economic growth through innovation.
Innovation is a key driver of economic growth and a prime source of competition in the global marketplace (Organization for Economic Cooperation and Development (OECD) 2005); at least 50% of growth is attributable to it (Kayal 2008; Organization for Economic Cooperation and Development (OECD) 2005). Notable in this regard are the levels of adoption and creation of technological innovation (Grupp and Mogee 2004; Niosi 2010) and technological learning (Koh and Wong 2005; Organization for Economic Cooperation and Development (OECD) 2005) in creating this expansion.
A country’s economic growth progresses through three stages of technological change and productivity, namely factor-driven growth, investment-driven growth, and innovation-driven growth (Koh and Wong 2005; Rostow 1959; World Economic Forum (WEF) 2012). Factor-driven economies produce goods based on natural endowments and low labor cost; investment-driven economies accumulate capital (technological, physical and human) and offer investment incentives; and innovation-driven economies emphasize research and development (R&D), entrepreneurship, and innovation. Progressing from one growth stage to another involves transitioning from a technology-importing economy that relies on endowments, capital accumulation, infrastructure, and technology imitation, to a technology-generating economy that focuses on creating new products or knowledge using state of the art technology (Akcali and Sismanoglu 2015). In the creation of new products or knowledge, economies use a systemic approach to represent the interaction of public and private institutions with policies, incentives, and initiatives (Organization for Economic Cooperation and Development (OECD) 1997). Institutions include research entities, public laboratories, innovative enterprises, venture capital firms, and organizations that finance, regulate, and enable the production of science and technology (Malerba 2004; Mazzoleni and Nelson 2007; Niosi 2010). The government also plays a key role in supporting these institutions through policies, targeted incentives, R&D collaboration, and coordinated infrastructure.
According to the Oslo manual (Organization for Economic Cooperation and Development (OECD) 2005), innovation has been defined as the implementation of a new or significantly improved product/service, process, marketing method, or organizational method in business practices, workplace organization, or external relations. Though many innovation studies use variations of this definition, the common thread is Schumpeter (2008). Schumpeter proposes that the crux of capitalism lies in production for the mass market through creative destruction, that is, the continuous process of generating new products, processes, markets, and organizational forms that make existing ones obsolete (Lee 2015). In today’s digital era, technology is integral to advancing such creative destruction. It is no surprise that innovation vis-à-vis technological innovation drives economic growth.
Technological innovation is measured using science and technology (S&T) indicators. These indicators include resources devoted to R&D, patents, technology balance of payments, and international trade in R&D-intensive industries. The importance given to S&T indicators increased with the call for a comprehensive analysis of the economy that not only incorporates economic indicators, but also those that represent knowledge creation (Lepori et al. 2008). However, for S&T to translate to improved economic development (represented by improved quality of life, as well as wealth and employment creation), it needs to be geared towards bringing new products/processes into the marketplace—that is, towards innovation (Siyanbola et al. 2016). Attaining national development goals requires evidence-based and informed policy-making. Incorporating S&T indicators offers the scientific evidence needed to effectively design, formulate, and implement national innovation policies that contribute to economic development. We base our study on this premise and utilize the S&T indicators of R&D, patents, and exports to explore country-level technological innovation for policy analysis. We offer research-based suggestions for governments to compare innovation policy initiatives, seek insights into solving national development problems, identify outstanding best practices, and work collaboratively. The rest of the paper is organized as follows: section 2 describes the research background; section 3 covers methodology; section 4 discusses the analyses results and discussion; section 5 offers scope and limitations; section 6 covers contributions and implications for future research; and section 7 presents the conclusions of our research.
There are several theoretical models that offer rationales for technological innovation. The technology gap model developed by Posner (1961) explains how countries that are technologically advanced introduce new products into a market and enjoy the innovative advantage (Krugman 1986). However, this comparative advantage is transient and shifts over time to other countries that show sustained innovative activities due to technological improvements. The product life cycle hypothesis shows that industrialized countries with a high degree of human capital and R&D investment produce more technical innovations and new products (Maksimovic and Phillips 2008). These countries enjoy the comparative advantage early in the product’s life cycle. But as the country exports and the product becomes more standardized, it allows other countries to reproduce at a lower cost (with advanced technology) and gain market share. The endogenous growth model shows that technology, knowledge, and human capital are endogenous and primary contributors to innovation and economic growth (Gocer et al. 2016). It is clear that technological innovation offers a country a competitive edge that contributes to economic development and growth (Gocer et al. 2016). Therefore, measuring this phenomenon takes on increasing significance at national and global levels. We now describe our research framework and conceptualization of the innovation phenomenon.
In this research, we adapt the comprehensive framework presented at the OECD workshop for national innovation capability using S&T indicators (Qiquan et al. 2006).
According to Miles and Huberman (1994), a conceptual framework explains either graphically or in narrative form the main things to be studied—the key factors, concepts, or variables—and the presumed relationships among them. Using this definition, we have laid out the key concepts in our research and how they relate to the overall phenomenon of innovation.
We conceptualize innovation using the three components of inputs, knowledge creation and absorption, and outputs. Inputs to innovation are represented by efforts at research and development, including the expenditure and personnel hired for R&D. Knowledge creation and absorption represents national efforts at motivating and rewarding the innovative process. The outputs of innovation represent exports of products and services. Figure 1 shows the research framework.
R&D is an important input to national innovation. It includes creative work undertaken systematically to increase the stock and the use of knowledge to devise new applications—both of which have the potential to influence innovation. R&D expenditure is often used to encourage innovation and provide a stimulus to national competitiveness. Research has used R&D expenditure as a percentage of GDP (referred to as R&D intensity) to explain the relationship between firm size and innovative effort (Cohen 2010) and as an input to innovation (Kemp and Pearson 2007). In general, developed countries have higher R&D intensity than developing countries.
For a country, the gross domestic expenditure on R&D (GERD), which represents the expenditure on scientific research and experimental development, offers an indication of the allocation of financial resources to R&D in terms of the share in the GDP. Sufficient R&D funding is essential for innovation, economic growth, and sustainable development. Changes in R&D expenditure suggest evolving long-term strategies and policies related to innovation for economic development. GERD can be broken down among the performance sectors of business enterprise, government, higher education, and private not-for-profit institutions serving households.
Business enterprise expenditure on R&D (BERD) is an important indicator of business commitment to innovation. Although not all business investments yield positive results, efforts towards R&D signal a commitment to the generation and application of new ideas that lead to new or improved products/services for innovation. Research suggests that R&D spending is associated with productivity and GDP growth. An increase of 0.1% point in a nation’s BERD to GDP ratio could eventually translate to a 1.2% increase in GDP per capita (Expert Panel on Business Innovation 2009). Also, over the last few years, R&D intensity in the business sector has varied considerably between countries (Falk 2006). It is, therefore, useful to analyze this at a global level.
The government intramural expenditure on R&D (GOVERD) represents efforts by the government to invest in R&D. Several motivations have been proposed for the same. Endogenous theories present such investment to be the foundation for economic growth (Griliches 1980). Governments respond to market failures in which firms under-invest due to the risk of externalities and information issues (Arrow 1962). Additionally, government funding can stimulate corporate R&D activities (Audretsch et al. 2002; Görg and Strobl 2007). The average government-funded R&D expenditure in 24 OECD countries doubled in three decades, from $6.04 billion in 1981 to $12.3 billion in 2008 (in US dollars, constant prices) (Kim 2014).
Higher education expenditure on R&D (HERD) has been the focus of much research since the 1980s. Research has studied the transfer of knowledge and technology between the university and industry, using firm- or industry-level data (Adams et al. 2001; Collins and Wakoh 2000; Furman and MacGarvie 2007; Grady and Pratt 2000; Siegel et al. 2001). Since the 1990s, higher education institutions have played an increasingly important role in regional and national development in OECD countries, owing to the growth of strategic alliances across industry, research institutions, and knowledge intensive business services (Eid 2012). There is growing recognition that R&D in higher education institutions is an important stimulus to economic growth and improved social outcomes. Funding for these institutions comes mainly from the government, but businesses, too, fund some activities (Eid 2012). Private, not-for-profit organizations are those that do not generate income, profits, or other financial gain. These include voluntary health organizations and private philanthropic foundations. Expenditure on R&D represents the component of GERD incurred by units belonging to this sector.
In addition to expenditure, R&D personnel are an input to the innovation phenomenon in a country. R&D personnel refer to all human capital, including direct support personnel such as managers, administrators, and clerical staff, who are involved in the creation of new knowledge, products, processes, and methods, and may be employed in the public, private, or academic sectors.
In the innovation framework, knowledge creation is the process of coming up with new ideas through formal R&D (Organization for Economic Cooperation and Development (OECD) 2005). Knowledge absorption is the process of acquiring and utilizing knowledge from entities such as universities, public research organizations, or domestic and international firms (Organization for Economic Cooperation and Development (OECD) 2005). Factors influencing knowledge absorption include human capital, R&D, and linkages with external knowledge sources. On a national level, the creation and absorption of knowledge are manifested through the evolution of intellectual property (IP).
Countries institute regulatory frameworks in the form of patents and copyrights to protect intellectual property and innovation (Blind et al. 2004). The rationale for protection arises from the fact that innovation amounts to knowledge production, which is inherently non-rival and non-excludable. Non-rival refers to the notion that the amount of knowledge does not decrease when used by others, and non-excludable refers to the unlimited ability of others to use and benefit from the knowledge once it is produced. Countries, therefore, institute legal systems to protect the rights of inventors and patent holders. An example is the Bayh-Dole Act in the USA. By calibrating the strength of patent protection rights, policymakers can influence national innovation systems (Raghupathi and Raghupathi 2017). Legal protection and exclusivity of the use of knowledge allows investment in R&D and leads to the production of knowledge and innovation. Research in the area of patents has often centered on whether stronger IP rights lead to more innovation (Hall 2007; Hu and Jaffe 2007; Jaffe 2000) and on the endogeneity of patent rights on industries (Chen 2008; Moser 2005; Qian 2007; Sakakibara and Branstetter 2001). As patent rights change at a national level, industries within a country may react differently according to the importance of such rights to the respective industries (Rajan and Zingales 1998). Exploring patent applications or distribution by industry is therefore a key indicator of the extent of national innovation (Qian 2007). Patent laws have a significant effect on the direction of technological innovation in terms of which industries have more innovations (Moser 2005). In addition to using individual patent applications as a measure of patent activity, national innovation research also uses patent families, which are sets of patents/applications covering the same invention in one or more countries.
These applications relate to each other by way of one or several common priority filings (Organisation for Economic Co-operation and Development (OECD) 2009). Patent families reflect the international diffusion of technology and represent an excellent measure of national innovativeness (Dechezleprêtre et al. 2017).
In the current research, in contrast to studies that look at domestic patent families in an American (Hegde et al. 2009) or European context (Gambardella et al. 2008; Harhoff 2009; van Zeebroeck and van Pottelsberghe 2011), we adopt a global approach and consider patent families in all three major patent systems: the United States Patent and Trademark Office, the European Patent Office, and the Japan Patent Office. This examination allows us to identify possible international patent-based indicators that enable rigorous cross-country comparisons of innovation performances at national and sectoral levels.
Exports represent an output of the innovative activity of a country. Endogenous growth models suggest that firms must innovate to meet stronger competition in foreign markets (Aghion and Howitt 1998; Grossman and Helpman 1991; Hobday 1995). Firms enhance their productivity prior to exporting in order to reach an optimum level that qualifies them to compete in foreign markets (Grossman and Helpman 1995). Upon entry into the export market, continued exposure to foreign technology and knowledge produces a “learning-by-exporting” effect, with economies of scale that help cover the cost of R&D (Harris and Moffat 2012).
We address two research questions: (1) How do the S&T indicators (R&D expenditure, patents, and exports) influence country-level innovation? (2) How do countries around the world differ in innovation as measured by these indicators?
We now discuss our methodology in studying science and technology indicators for national innovation. Table 1 summarizes the research methodology.
We downloaded innovation data from the Main Science and Technology Indicators (MSTI) database of the Organization for Economic Cooperation and Development (OECD), for the years 2000 to 2016 (https://stats.oecd.org). The MSTI database contains indicators that reflect the science and technology efforts of OECD member and non-member countries. The data covers provisional or final results, as well as forecasts from public authorities. MSTI indicators include, among others, resources dedicated to research and development, patents (including patent families), and international trade in R&D-intensive industries. Table 2 shows the variables in the research.
The total percentage of patents invented with foreign co-inventors under the EPO, where the reference date is the priority date.
The total percentage of patents owned by foreign residents under the EPO, where the reference date is the priority date.
The total percentage of patents invented abroad under the EPO, where the reference date is the priority date.
Research and development variables include gross domestic expenditure on R&D (GERD) for the business enterprise, higher education, government, and private non-profit sectors; funding sources include industry, government, abroad, and other national sources. For business enterprise, we included the aerospace, computer/electronic/optical, and pharmaceutical industries. We also look at R&D personnel as a percentage of the national total in the abovementioned business sectors. In patents, we consider indicators such as the number of triadic patent families, which comprise patents filed at the European Patent Office, the United States Patent and Trademark Office, and the Japan Patent Office for the same invention by the same applicant or inventor. Patent variables also include those representing the international flow of patents and cross-border ownership in the inventive process. Patents are considered for the technology sectors of biotechnology, information and communication technology (ICT), environmental technology, and pharmaceuticals. As exports represent the outputs of the innovative process, we consider total exports in the three industries of aerospace, computer/electronic/optical, and pharmaceutical.
We use panel data that includes variables for multiple indicators spanning multiple countries and time periods. There are several ways to group country-level data. We use the World Bank income-level classification of high, upper-middle, lower-middle, and low income. However, due to limited data availability, we focus only on the upper-middle- and high-income categories. In addition, we use the region classification of East Asia and Pacific, Europe and Central Asia, Latin America and the Caribbean, Middle East and North Africa, North America, South Asia, and Sub-Saharan Africa.
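For illustration, the panel's grouping logic can be sketched in a few lines of Python (the records below are hypothetical stand-ins, not actual OECD values):

```python
from collections import defaultdict

# Hypothetical slice of a country-year panel; classifications follow the
# World Bank groupings used in the text.
panel = [
    {"country": "Germany", "year": 2015, "gerd_pct_gdp": 2.9,
     "region": "Europe and Central Asia", "income": "High"},
    {"country": "Chile", "year": 2015, "gerd_pct_gdp": 0.38,
     "region": "Latin America and the Caribbean", "income": "High"},
    {"country": "China", "year": 2015, "gerd_pct_gdp": 2.07,
     "region": "East Asia and Pacific", "income": "Upper-middle"},
]

def mean_by(records, key, value="gerd_pct_gdp"):
    """Average an indicator within each group (region or income level)."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec[key]].append(rec[value])
    return {g: sum(vals) / len(vals) for g, vals in groups.items()}

print(mean_by(panel, "income"))
```

The same records can be aggregated by `region` or by `income`, which is the basic operation behind the regional and income-level comparisons that follow.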
For the data analysis, we use Tableau, an advanced business intelligence and data mining tool, for visualization and descriptive analysis, and R for the panel regression analysis.
We use the visualization tools in Tableau to analyze and gain insight from the innovation data. The goal of visualization is to analyze, explore, discover, illustrate, and communicate information. In today’s digital era, users are overwhelmed with myriad information. Visualization provides models of the information (Khan and Khan 2011; North 2005) and makes vast amounts of complex information intelligible. The visual analytics capabilities of Tableau represent data graphically, filter out what is not relevant, drill down into lower levels of detail, and highlight subsets of data across multiple graphs, all simultaneously. This level of customization yields insights unmatched by traditional approaches. Static graphs delivered on paper or on screen help communicate information clearly, an enormous benefit, but the greatest benefits come from interactive visual analytics. Visual representation allows one to identify patterns that would otherwise be buried in vast, unconnected datasets. Unlike the traditional hypothesis-and-test method of inquiry, which relies on asking the right questions, data visualizations bring themes and ideas to the surface, without preconceived assumptions, where they can be easily discerned. In the current context of global data, we integrate and display the dimensions of information needed to rapidly monitor aspects of global innovation on interactive dashboards. In this way, we discover patterns and relationships in various areas of innovation inquiry for policymaking and national development. In addition to the visualization approach, we also deploy econometric panel regression analysis. Our research contains panel data: observations of multiple phenomena obtained over multiple time periods for different regions.
Panel analysis allows us to address analytical questions that regular cross-sectional or longitudinal analysis cannot. It also provides means to handle missing data; otherwise, problems such as the correlation of omitted variables with explanatory variables can undermine the accuracy of the estimated effects.
We discuss the results of our analysis and offer a comprehensive summary for each component in the research. We first discuss the visualization approach and then the panel analysis approach.
Expenditure on R&D encompasses gross domestic expenditure (GERD), business enterprise expenditure (BERD), government intramural expenditure (GOVERD), and higher education expenditure (HERD). We analyze the distribution of GERD by country (Fig. 2).
As seen in Fig. 2, there are variations between countries in the expenditure on R&D. Israel takes the lead in GERD, with 25% more than the expenditure of Japan, double that of China, and quadruple that of Chile. Other leaders include South Korea, Finland, Austria, Denmark, Chinese Taipei, and the USA. In terms of the trend of expenditure, some countries show a sharp increase over the years: South Korea rose from 2.18 to 4.23% between 2000 and 2015; China from 0.885% in 2000 to 2.067% in 2015; and Denmark from 2.32 to 2.95% between 2001 and 2015. Other countries show a consistent range of R&D expenditure over this same span. The USA, for example, stayed between 2.62 and 2.79% for the period 2000 to 2015, and Mexico showed minimal change, from 0.33 to 0.49%, between 2000 and 2015. In terms of R&D intensity, countries like Chile, Mexico, and Romania show a consistently low rate (between 0.2 and 0.4%), while others like Canada actually show a decrease over the years (1.865% in 2000 to 1.605% in 2014). A country’s increase in expenditure on R&D reflects a dedicated effort towards national innovation.
We analyze the distribution of the expenditure for each of the performance sectors of business, government, and higher education. Figure 3 shows the distribution of business enterprise expenditure on R&D (BERD) as a percentage of GDP among countries.
BERD encompasses all R&D activities performed by enterprises and is normalized to account for the size of the country. As seen in Fig. 3, most countries show an increasing trend in BERD for the years 2000 to 2015. Israel takes the lead in BERD, followed by South Korea and Japan. Chile, Romania, and Latvia have the lowest expenditure. The distribution of government intramural expenditure on R&D (GOVERD) is shown in Fig. 4.
As Fig. 4 shows, most countries show a decreasing trend in government expenditure on R&D for the years 2000 to 2015. Iceland shows a large drop, from 0.66% in 2000 to 0.11% in 2010. Chinese Taipei also shows a decrease, from a relatively high 0.55% in 2003 to 0.38% in 2015. Other countries with a relatively low share of GOVERD also demonstrate a decreasing trend over these years. For example, Denmark shows a fall from 0.27 to 0.07% in 2015. In general, we see that government expenditure on R&D follows a decreasing pattern over time. A possible explanation is that research carries a degree of uncertainty and risk, initially keeping businesses and private investors at bay and forcing the government to step in to fill this void and encourage innovation (Walwyn and Cloete 2016). The implications will be discussed in detail in the “Conclusions” section. Figure 5 shows the distribution of higher education expenditure (HERD).
As Fig. 5 illustrates, a majority of countries show an increase over the years. Denmark, in particular, shows a large increase, from 0.44% in 2000 to 0.99% in 2015. Sweden, Switzerland, Portugal, and the Czech Republic show similar increases. A few countries demonstrate a decrease; Hungary is one of them (0.25% in 2003 to 0.17% in 2015). For the most part, the emphasis among countries on HERD is a promising trend. It reflects increasing awareness that R&D in higher education institutions offers an integral stimulus to economic growth and innovation. Relatedly, most higher education institutions receive their funding from national governments and businesses (Eid 2012).
Using regional and income-level classifications, we looked for differences in the distribution of expenditure (GERD) in each of the performance sectors of business, government, and higher education. Figure 6 shows the sectoral distribution by region.
Figure 6 shows that for all regions, the business sector shows the highest expenditure, followed by the education and government sectors. The private non-profit sector shows the lowest expenditure. In Europe and Central Asia, there is a greater proportion of expenditure from the higher education sector than the government sector. In terms of government expenditure on R&D, the Sub-Saharan region shows the highest spending, while North America shows the lowest. We then mapped the sectoral distribution by income level (Fig. 7).
In both the high- and upper-middle-income countries, the business sector has the highest expenditure on R&D. This is followed by the higher education sector in the high-income countries and the government sector in the upper-middle-income countries. Expenditure on R&D from the private, non-profit sector is negligible for both income levels. It can be seen that the governments of high-income countries are less focused on spending on R&D. The implication of this finding is discussed in our conclusions.
The source of funds for GERD is an important factor in analyzing the pattern of investments and expenditure for R&D. We wanted to see if there were any identifiable patterns in country-level funding for R&D. Figure 8 shows the source of funds for R&D expenditure for regions around the world.
Figure 8 shows the expenditure financed by government, industry, and other national and foreign sources. Funding from abroad represents an important source for expenditure because R&D is an activity that entails significant transfer of resources between entities such as organizations and countries. Our analysis shows notable regional variation in the structure of funding. For instance, in East Asia and Pacific, industry funds about two thirds of total GERD (66.51%), with a relatively low share of funding by government, other national, and foreign sources. In the Sub-Saharan region, 44.54% of expenditure is funded by industry and an almost equal share by government (41.31%). Contrary to our hypothesis, then, only the developed regions show predominantly industry funding, while the developing (upper-middle-income) regions show equal or greater funding from government.
Industries play a vital role in business R&D expenditure and are pivotal to improving the innovative landscape of a country. Figure 9 shows the distribution of BERD among regions for the industries of service, aerospace, computer/electronic/optical, and pharmaceutical.
Figure 9 shows that in East Asia and Pacific, the highest expenditure is by the computer/electronic/optical industry; service follows. In Europe and Central Asia, expenditure in the year 2000 is equal across all four industries, but during later years, the service industry dominates expenditures. The distribution is different in Latin America and Caribbean, where more than 95% of expenditure is concentrated on the service and computer industries over the span we are studying. In North America, the service and computer industries dominate throughout the time span, with the aerospace industry taking the smallest share—less than 20%. In the sub-Saharan region, the service industry plays a dominant role. Note that our analysis for this region is limited because data is missing for the years 2010 to 2015, but we chose to include in our analysis the data that is available in order to provide as holistic an outlook as possible. In general, it appears that developed regions focus more on the technology sectors while developing regions focus more on service sectors.
We looked at the number of researchers engaged in each performance sector as a percentage of the national total. Regions and countries differ in the allocation of personnel across the business, government, and higher education sectors. Since governments in developing regions invest more in R&D expenditure, we expected the same pattern to be reflected in personnel. Figure 10 shows the sectoral distribution of R&D personnel as a percentage of national totals for each region.
Figure 10 clearly displays structural variations in the regions for R&D personnel. The regions of East Asia and Pacific, Europe and Central Asia, and North America reveal a steady business sector pattern, engaging the highest share of personnel; the government sector engages the lowest. In other regions, such as Latin America and the Caribbean, the education sector leads in personnel. The business sector is significant as it shows a steady annual increase. It appears that, in developed regions the business sector engages a higher share of R&D personnel than in developing regions. We now move on to analyze patents.
The average number of patent applications filed under the Patent Cooperation Treaty (PCT) reflects the extent of technological innovation in a country (Fig. 11). Figure 11 shows the patent applications filed under PCT for the years 2000 to 2016.
In the figure, green denotes a high number and red a low number of applications. The USA has the highest number of patent applications, followed by Japan and China. By comparison, Russia, Argentina, and other countries see a very low number of patent applications, signaling a need for innovative focus.
As for the number of patent applications filed under European Patent Office (EPO) (Fig. 12), the USA leads Japan, Germany, and France. We now turn to applications for triadic patent families (Fig. 13).
A triadic patent family is defined as a set of patents registered in patent offices of various countries to protect the same invention. The European Patent Office (EPO), the Japan Patent Office (JPO), and the United States Patent and Trademark Office (USPTO) are the three major patent offices worldwide. Counting triadic patent families begins with each inventor’s country of residence and the initial date of registration of the patent. Indicators based on patent families normally enhance the international comparability and the quality of patent indicators. The greatest number of triadic patent families originated in the USA, followed by Japan, Germany, and France. In general, countries like the USA, Japan, Germany, and France are high in the number of patent filings under EPO and PCT.
Co-inventions reflect international collaboration and represent the flow of information and knowledge across borders. They also indicate the flow of funds from multinational companies for R&D. Figure 14 shows the analysis for the percentage of patents owned by foreign residents under EPO.
In Fig. 14, countries represented in green indicate a high percentage of ownership of patents with foreign residents; the darker the green, the higher the percentage. Countries in red indicate a low percentage of ownership. The USA, Japan, and the major EU economies have a relatively low share of patents owned by foreign residents, and most of the patent ownership in these countries is local. By contrast, countries like Argentina, Russia, and Mexico show a high percentage of patents owned by foreign residents. These countries rely on foreign collaboration to strengthen their resources and facilities for innovation (Raghupathi and Raghupathi 2017). This signals a need to strengthen local innovation by targeting education systems to offer relevant skills and knowledge that foster growth. The portfolio of ownership of patents between local and foreign residents is an interesting revelation that offers implications for national policies on taxation and innovation, and will be discussed further in our conclusions. In the analysis of the percentage of patents invented abroad (Fig. 15), the difference globally is not as varied as it is for patents owned by foreign residents (as shown earlier, in Fig. 14).
As Fig. 15 illustrates, the majority of countries in our study show less than 1% of patents are invented abroad. Only Switzerland (1.16%) and Ireland (1.28%) show relatively high levels.
We looked next at the differences in distribution of patents by sector. We considered the various technology domains of environmental technology, pharmaceutical, ICT, and biotechnology. Figure 16 shows the comparison of the number of patents in each technology domain to the benchmark of the total number of patent applications under PCT.
Environmental technology is the application of environmental science, green chemistry, environmental monitoring, and electronic technology to monitor and conserve the natural environment and resources and to mitigate the negative effects of humans on the environment. Sustainable development is the primary objective of environmental technology. Figure 16 shows the number of patents in the environmental technology sector, which indicates a steady increase from 2000 to 2011 and a sudden decrease from 2011 to 2013. That said, the total number of PCT patents over the years shows an increasing trend. The biotechnology sector includes technological applications that use biological systems or living organisms to develop or make products for specific use. The number of patents in the pharmaceutical and biotechnology sectors has been increasing over the years. Among all the sectors and throughout the time span we studied, ICT saw a high number of patents and a consistently positive trend. The number of patents in this sector nearly equals the total number under PCT, highlighting the overall dominance of the ICT sector in the patent industry.
We then looked for significant associations among sets of innovation indicators. We started with expenditure on R&D and the R&D personnel in the sectors of business, government, and higher education.
Governments use R&D statistics collected by countries from businesses to make and monitor policy related to national science and technology. These statistics also feed into national economic statistics, such as GDP and net worth. Different performance sectors may have different kinds of associations between R&D expenditure and personnel. Figure 17 shows the associations between BERD and R&D personnel in the business sector.
Figure 17 shows a significant positive association (p < 0.0001) in the business sector between expenditure on R&D (BERD) and R&D personnel; the implication is that an increase in expenditure should be accompanied by an increase in R&D personnel. But this is not the case when we break down the analysis by region. In Latin America and the Caribbean, while the percentage of researchers in the business sector is high, the expenditure is consistently low for all countries in the region. The explanation is that in developing regions such as these, while there is recognition of the need to focus on R&D by employing more personnel, the cost of deployment is relatively low. By contrast, in North America, both R&D expenditure and R&D personnel are high, likely because of the high cost of deploying personnel. In East Asia and Pacific, while some countries are high in both business expenditure and personnel, others are low in both. Across all regions, some countries show a large business expenditure on R&D with no associated increase in personnel. Examples include Israel in Latin America and the Caribbean, Japan in East Asia and Pacific, and Slovenia in Europe and Central Asia. On the flip side, there are countries, Romania and Ireland (in Europe and Central Asia) among them, that show a large fluctuation in personnel with little change in expenditure. In general, there is a positive association between R&D expenditure and R&D personnel in the business sector.
Figure 18 shows the analysis of R&D personnel and intramural expenditure on R&D (GOVERD) for the government sector.
Figure 18 shows a significant positive association (p < 0.0001) between GOVERD and R&D personnel. Though the region of East Asia and Pacific is similar to Latin America and the Caribbean in government expenditure on R&D, it has a lower percentage of R&D personnel. This can be attributed to a relatively high cost of labor in East Asia and Pacific compared to Latin America and the Caribbean. In North America, both expenditure and personnel are lower than those of East Asia and Pacific. This is due to the increased emphasis on R&D by the business sector over the government. Figure 19 shows the relationship between expenditure and personnel in the higher education sector (HERD).
In the case of higher education (Fig. 19), we did not find a significant association between the expenditure on R&D and the percentage of researchers (p > 0.05).
It is important to analyze gross domestic expenditure on R&D (GERD) because it represents an aggregate of the sectors of business, government, and higher education and because it is considered the preferred method for international comparisons of overall R&D expenditure. Figure 20 shows the relationship between expenditure (GERD) and R&D personnel in all the sectors.
The relationship between GERD and the percentage of researchers is significant and positive (p < 0.0001) for the business sector, but significant and negative (p < 0.001) for the government and higher education sectors. This means that in the business sector, an increase in expenditure is associated with an increase in R&D personnel (the percentage of researchers). In the government and higher education sectors, an increase in expenditure is associated with a decrease in personnel. This highlights the fact that, in general, most of the R&D expenditure and personnel come from the business sector and not from the government or higher education sectors. We searched for associations between national exports and business expenditure on R&D (BERD) by industry to see if there were dominant patterns.
We analyze business expenditure on R&D and exports for different industries. Figure 21 shows the analysis for the aerospace industry.
Figure 21 depicts the exports and BERD for the aerospace industry for each country. The intensity of the color indicates the quantity of exports, while grid size denotes expenditure. Only countries for which data can be adequately mapped are shown in the diagram. The USA is the leader in exports and expenditure in this industry. France, Germany, the UK, and others show high exports and relatively low expenditure. Japan and China are low in both expenditure and exports in the aerospace industry. Figure 22 shows the exports and business expenditure on R&D for the computer/electronic/optical industry.
In the computer/electronic/optical industry, the USA leads China, Japan, and Korea in terms of expenditure. In terms of exports, China is the leader. While the UK performed well in exports in the aerospace industry (shown in Fig. 21), it fares low in both exports and expenditure in the computer/electronic/optical industry. Italy and Israel show very low exports and expenditure in this industry. Figure 23 illustrates the analysis of exports and business expenditure for the pharmaceutical industry.
Figure 23 shows the USA leading in business expenditure on R&D in the pharmaceutical industry. In exports, Germany takes the lead, followed by the USA, the UK, France, Switzerland, and Belgium. China and Japan have very low exports but moderate expenditure, while Canada and Korea are low in both exports and expenditure in the pharmaceutical industry.
Overall, the USA ranks high in exports and expenditure on R&D, in all three industries; China does best in the computer industry; Japan does better in the computer and pharmaceutical industries than in aerospace; and the UK fares best in the pharmaceutical industry. Countries can use these findings to adjust their resource allocations to R&D in terms of industries.
We analyzed the association between R&D expenditure and international cooperation in patents (Figs. 24, 25, and 26).
Figure 24 shows the association between the percentage of patents invented with co-inventors and the four kinds of R&D expenditure. The association is significant (p < 0.0001) and negative for all types of R&D expenditure (GERD, BERD, GOVERD, HERD). With an increase in the percentage of R&D expenditure from any sector, the percentage of patents invented with co-inventors decreases, implying more local innovation. This holds promise for countries trying to enhance innovation in terms of their contribution to GDP.
For patents owned by foreign residents (Fig. 25), there is a significant negative relationship (p < 0.0001) with all four types of R&D expenditure. With an increase in expenditure on R&D in any sector, there is a decrease in the percentage of patents owned by foreign residents. Specifically, our analysis reveals that with a 1% increase in gross expenditure on R&D, the percentage of patents owned by foreign residents decreases by 12.27%; for business expenditure, the percentage decreases by 14.07%; and for government and higher education expenditure, the decreases are 58.95% and 46.25%, respectively.
However, the association of R&D expenditure with patents invented abroad (Fig. 26) differs from that with patents invented with co-inventors and with foreign residents. There are significant and moderately positive associations between patents invented abroad and R&D expenditure in all sectors, with the exception of government. As R&D expenditure in the business or higher education sector increases, the percentage of patents invented abroad increases: in the face of increasing expenditure, local innovation becomes more expensive, leading to more foreign collaboration. The exception is government expenditure, where the relationship is significantly negative: as government expenditure on R&D increases, the percentage of patents invented abroad decreases. This finding has important policy implications for governments of developing countries, which should direct more resources to R&D with a view to improving local innovation and its contribution to GDP. This is discussed in detail in the “Conclusions” section.
We now discuss our second approach of econometric panel analysis.
Our panel analysis follows a threefold structure: first, we perform a regression analysis on exports for each industry; next, we analyze the influence of R&D expenditure on patents; and lastly, we explore the international ownership of and investment in patents and their influence on exports. We use the plm package in R for all the analyses.
There are certain steps that need to be taken in order to prepare the data for panel analysis. As a first step to ensuring integrity, we inspected the dataset for missing data (Table 3). As the results show, some variables had more than 20% missing values. We deleted these and used the Random Forest algorithm to fill in values for the remaining variables.
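The missing-data screen described above can be sketched in a few lines. This is an illustrative sketch, not the authors' code: the variable names and values are hypothetical, and the Random Forest imputation step for the surviving variables is omitted.

```python
# Inspect the share of missing values per variable and drop variables
# whose missing share exceeds a 20% threshold, as described in the text.
# Data and variable names below are made up for illustration.
rows = [
    {"GERD": 2.1, "BERD": 1.4, "HERD": None},
    {"GERD": 2.3, "BERD": None, "HERD": None},
    {"GERD": None, "BERD": 1.5, "HERD": 0.4},
    {"GERD": 2.2, "BERD": 1.6, "HERD": None},
    {"GERD": 2.4, "BERD": 1.7, "HERD": 0.5},
]

def missing_share(rows, col):
    """Fraction of rows where `col` is missing."""
    return sum(r[col] is None for r in rows) / len(rows)

threshold = 0.20
keep = [c for c in rows[0] if missing_share(rows, c) <= threshold]
drop = [c for c in rows[0] if missing_share(rows, c) > threshold]
print(keep, drop)  # ['GERD', 'BERD'] ['HERD']
```

Variables that survive the screen would then have their remaining gaps filled by the imputation model before the panel analysis.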
The descriptive statistics for the complete dataset are depicted in Table 4.
The next step was to ensure that the data are stationary and suitable for panel analysis. For this, we performed unit root testing with the Augmented Dickey-Fuller (ADF) test (Table 5). The ADF values are all significant (p < 0.01), confirming that the data are appropriate for panel analysis.
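The intuition behind the unit root test can be sketched with a simplified Dickey-Fuller regression. This is a hedged illustration, not the paper's implementation: it omits the lagged-difference terms of the full ADF test and the comparison against tabulated critical values, and the series are made up.

```python
# Regress the first difference on the lagged level: a clearly negative
# slope indicates mean reversion (stationarity); a slope near zero
# indicates no pull back toward a mean, as with a unit root.
def df_slope(y):
    """OLS slope of the regression of y_t - y_{t-1} on y_{t-1}."""
    lag = y[:-1]
    dy = [b - a for a, b in zip(y[:-1], y[1:])]
    n = len(lag)
    mx = sum(lag) / n
    my = sum(dy) / n
    sxx = sum((x - mx) ** 2 for x in lag)
    sxy = sum((x - mx) * (d - my) for x, d in zip(lag, dy))
    return sxy / sxx

mean_reverting = [1, -1, 1, -1, 1, -1, 1, -1]  # flips back every period
trending = list(range(1, 11))                  # drifts steadily upward

print(df_slope(mean_reverting))  # close to -2: strongly mean-reverting
print(df_slope(trending))        # 0.0: no mean reversion
```

In practice the slope's t-statistic is compared against Dickey-Fuller critical values rather than inspected directly.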
The next test was to check for multicollinearity among the variables. Since the preliminary correlation analysis revealed high correlations between certain variables, we computed the variance inflation factor (VIF) for the variables within each industry (Table 6). The results showed some VIFs above 10, confirming multicollinearity. We therefore deleted these variables and reran the test on the remainder. The results were now satisfactory, with all VIFs below 10. Table 6 shows the results before and after the multicollinearity analysis for each industry. The variables are now ready to be deployed in a regression model for each industry.
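The VIF screen can be illustrated for the two-predictor case, where VIF = 1 / (1 - r²) with r the correlation between the two predictors. This is a hedged sketch with made-up data; the general VIF regresses each variable on all the others, which the paper's industry-level screens would do.

```python
# Two-predictor VIF: values above the usual cut-off of 10 flag
# multicollinearity, prompting deletion of one of the pair.
def corr(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def vif_pair(x, y):
    r = corr(x, y)
    return 1.0 / (1.0 - r * r)

gerd = [1.0, 2.0, 3.0, 4.0, 5.0]
berd = [1.1, 2.1, 2.9, 4.2, 4.9]  # nearly proportional to gerd
herd = [3.0, 1.0, 4.0, 1.0, 5.0]  # unrelated

print(vif_pair(gerd, berd))  # ≈ 150 (> 10): collinear pair, drop one
print(vif_pair(gerd, herd))  # ≈ 1.14: keep both
```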
The pooling (mixed effects) model takes the form y_it = a + β_it x_kit + ε_it, where a represents the intercept, β_it the coefficient for each attribute, x_kit the attributes, and ε_it the residual of the model.
The individual fixed effects model takes the form y_it = λ_i + β_it x_kit + ε_it, where λ_i represents the intercept for each individual in the panel dataset, β_it the coefficient for each attribute, x_kit the attributes, and ε_it the residual of the model.
The random effects model takes the form y_it = a + λ_i + β_it x_kit + ε_it, where a represents the intercept of the model, λ_i the intercept for each individual in the panel dataset, β_it the coefficient for each attribute, x_kit the attributes, and ε_it the residual of the model. However, as the ADF test results show (Table 5), all the variables satisfy the stationarity assumption, eliminating the need for first-differenced models. We therefore decide on the final model through a two-step comparison: between the pooling (mixed effects) and fixed effects models, and between the fixed effects and random effects models. We first use the analysis of variance (ANOVA) F test for both the time and country dimensions. Then, we use the LM test to check for random effects, and the Hausman test to compare the leading influence of the fixed and random effects models. We now discuss the regression analysis by industry.
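The individual fixed effects model above can be estimated with the "within" transformation: demean y and x within each country, then run OLS on the demeaned data, which removes the country intercepts λ_i before estimating the slope. The sketch below is illustrative only (one regressor, a noise-free made-up two-country panel), not the paper's plm code.

```python
# Within ("fixed effects") estimator for a single regressor.
def within_beta(panel):
    """panel: {country: [(x, y), ...]} -> within-estimator slope."""
    dx, dy = [], []
    for obs in panel.values():
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            dx.append(x - mx)
            dy.append(y - my)
    return sum(a * b for a, b in zip(dx, dy)) / sum(a * a for a in dx)

# y = 2*x plus a country-specific intercept (10 for A, -5 for B).
panel = {
    "A": [(1, 12), (2, 14), (3, 16)],
    "B": [(1, -3), (2, -1), (3, 1)],
}
print(within_beta(panel))  # 2.0: the country intercepts drop out
```

Despite the very different levels of y in the two countries, demeaning recovers the common slope exactly.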
We first did a regression analysis on exports for the computer industry. Table 7 shows the results for the comparison of the pooling (mixed) and fixed effects models for the industry.
As shown in Table 7, the individual fixed effects model is better than the mixed effects model (p < 0.0001) and the time fixed effects model (p < 0.0001). We used the LM test (Table 8) to check for random effects, and the Hausman test (Table 9) to analyze the significance of the effects.
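The Hausman comparison between fixed and random effects can be illustrated with a scalar sketch. The numbers below are made up for illustration; with a vector of coefficients, the statistic uses the full covariance matrices rather than scalar variances.

```python
# Scalar Hausman statistic: H = (b_FE - b_RE)^2 / (v_FE - v_RE), compared
# against a chi-squared critical value (3.84 for 1 df at the 5% level).
# A large H means the FE and RE estimates differ systematically, so the
# random effects model's assumptions are rejected.
def hausman(b_fe, v_fe, b_re, v_re):
    """Scalar Hausman statistic; assumes v_fe > v_re."""
    return (b_fe - b_re) ** 2 / (v_fe - v_re)

h = hausman(b_fe=1.9, v_fe=0.05, b_re=1.2, v_re=0.01)
print(h > 3.84)  # True: reject random effects in favour of fixed effects
```

When H falls below the critical value, as in the computer industry here, the random effects model is retained.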
The LM test (Table 8) confirms the significance of the random effects (p < 0.0001). The results of the Hausman test (Table 9) indicate that the random effects model is better than the fixed effects model. Accordingly, we ran the random effects model for the computer industry (Table 10).
As Table 10 shows, five variables significantly influence exports: business expenditure (BERD), government expenditure (GOVERD), income level, gross expenditure on R&D performed by the higher education sector (GERD_HIGH_PERFORM), and patent filings under the EPO (PATENT_EPO). Of these, government expenditure is the most significant: a unit increase in government expenditure is associated with an almost 3-unit (2.8649) increase in exports. Time and country are significant influencers in this industry, along with government and business expenditure, income level, and educational performance. Although patents have a positive influence on exports, the low coefficient (0.0000794) indicates that their effect is not as substantial as that of the other variables.
Table 11 shows the comparison between the pooling (mixed) and fixed effects models for the aerospace industry.
As shown in Table 11, the individual fixed effects model is better than the mixed effects model (p < 0.0001). In comparing the mixed and time fixed effects models, we see that time is not a significant influencer of exports in this industry. The comparison between the fixed effects models also supports the conclusion that the individual fixed effects model is better (p < 0.0001). Next, we performed the LM test to check for random effects (Table 12) and the Hausman test to compare the random and fixed effects models (Table 13).
The results of the LM test (Table 12) show the random effects to be significant (p < 0.0001) in the analysis for the industry. The results of the Hausman test (Table 13) show the fixed effects model to be better than the random effects model (p < 0.0001).
We therefore ran a fixed effects model analysis for the aerospace industry (Table 14).
However, the results in Table 14 show that the model is not significant (p > 0.05) in explaining the relationships in the panel dataset for the industry. In light of this, we performed a random effects model analysis for the aerospace industry (Table 15).
In the random effects model shown in Table 15, the variables that have a positive influence on exports include business expenditure (BERD), government expenditure (GOVERD), income level, gross expenditure on R&D performed by the higher education sector (GERD_HIGH_PERFORM), patent filings under the EPO (PATENT_EPO) and in ICT (PATENT_ICT), triadic patents (PATENT_TRIADIC), patents in environmental technology (PATENT_ENV), and OECD membership (OECD_MEM). All four types of patents have a positive influence on exports, and the influence is much higher here than in the computer industry. Income level and OECD membership also show a positive influence on exports in this industry.
The comparison between the fixed and mixed effects models for the pharma industry (Table 16) shows that the individual fixed effects model is better (p < 0.0001). Time does not appear to be a significant factor in this industry (p > 0.05). The comparison between the fixed effects models also supports the conclusion that the individual fixed effects model is better (p < 0.0001).
We did the LM test to see if random effects are significant (Table 17) and the Hausman test to compare the random and fixed effects models (Table 18).
From the LM test results (Table 17), we see that the random effects are better (p < 0.0001), and from the Hausman test (Table 18), we see that the random effects model better explains the relationships in this industry. Accordingly, we did the random effects model for the pharma industry (Table 19).
The results in Table 19 show that patents, in terms of the number of applications, as well as ownership and funding from abroad, play a more prominent role in exports of this industry than in the others.
In this section, we analyze the relationship between the number of patent applications and government-related variables such as expenditure and personnel. The dependent variable is the number of patents under the PCT (PATENT_PCT). The six independent variables are GOVERD, GOVER_RESEARCHER, INCOME_LEVEL, OECD_MEM, GERD_GOV_PERFORM, and GERD_GOV_FINANCE. Using a correlation matrix, we found that most of the coefficients are below 0.5 (Table 20). We also calculated the variance inflation factor (VIF) to test for multicollinearity (also shown in Table 20). All variables have a VIF below 5, indicating no multicollinearity.
Table 21 shows the comparison between the pooling (mixed) effects and the fixed effects models. The results indicate that the individual fixed effects model is better than both the mixed effects (p < 0.0001) and the time-fixed effects models (p < 0.0001).
We did the LM test (Table 22) and confirmed that the random effects model is significant (p < 0.0001) in this analysis. From the Hausman test (Table 23), we confirmed that the individual fixed effects model is better than the random effects model (p < 0.0001). Accordingly, we ran the individual fixed effects model for patents (Table 24).
The results of the individual fixed effects model (Table 24) show government expenditure on R&D (GOVERD) and gross expenditure on R&D performed by the government sector (GERD_GOV_PERFORM) as having significant positive effects on the number of patent applications (PATENT_PCT). GOVERD, in particular, has a very large influence: a unit increase brings about a 7040-unit increase in patents. This reflects the importance of government investment in R&D as a driving factor for patents and for innovation.
We now explore the influence of variables relating to gross expenditure on R&D (GERD, GERD_BUS_PERFORM, GERD_GOV_PERFORM, GERD_OTHER_FINANCE, GERD_IND_FINANCE, GERD_ABROAD_FINANCE, and GERD_HIGH_PERFORM) on the percentage of patents owned by foreign residents (COOP_OWN). We checked for multicollinearity with the VIF test, resolved it by deleting the affected variables, and redid the test. Table 25 shows the results before and after the multicollinearity analysis.
Table 26 shows the comparison between pooling (mixed) and fixed effects models.
The results in Table 26 show that the fixed effects model is better (p < 0.0001) than the mixed effects model. Between the fixed effects models, the individual effects model is better (p < 0.0001), meaning the influence of R&D expenditure on patents varies among countries. Time fixed effects, by contrast, are not significant (p > 0.05). Table 27 shows the LM test, and Table 28 the Hausman test.
From the results of the LM test (Table 27), we see that the random effects model is significant (p < 0.0001). The Hausman test (Table 28) shows that the individual fixed effects model is good for this analysis (p < 0.0001). Therefore, we constructed the individual fixed effects model for the variables (Table 29).
According to the model results in Table 29, GERD financed from abroad (GERD_ABROAD_FINANCE) and gross expenditure on R&D performed by the higher education sector (GERD_HIGH_PERFORM) have negative coefficients, as do business (BERD) and government (GOVERD) expenditures. Of these, government expenditure has the largest influence on patent ownership by foreign residents: a unit increase in government expenditure on R&D is associated with a decrease of 27.94 units in patents owned by foreign residents, a substantial impact.
In summary, a few things stand out from the panel analysis. First, the random effects model is better at explaining the relationship between exports and the independent variables in all three industries. Time and country factors are significant in the model. Government expenditure on R&D has a positive influence on exports in all three industries (coefficients = 2.865, 0.085, and 0.563, respectively), implying that increased government expenditure will positively influence innovation through exports. Business expenditure, government expenditure, and gross expenditure in higher education are all key drivers of country-level innovation.
Second, the computer industry has the largest R-squared value among the three industries (0.41 in computer, 0.02 in aerospace, and 0.005 in pharmaceutical), indicating that the independent variables have a stronger influence on exports in the computer industry than in the others. The results also suggest that, of the three, investment in R&D in the computer industry will have a stronger influence on national innovation.
Third, for all three industries, in terms of patents, government expenditure and government financing have a positive correlation with the number of patent applications. For instance, one unit increase in government expenditure (GOVERD) shows a 7040-unit increase in number of patent applications. This reflects the potential for government investment in R&D to influence innovative efforts.
Fourth, there is a negative correlation between R&D expenditure being financed from abroad and patents owned by foreign residents. This reflects the necessity for countries to ramp up internal financing for R&D so as to encourage local innovation.
There are some limitations to our study. First, although we cover a period of 16 years, future studies can look at a longer span, facilitating the prospect of uncovering more trends and patterns in the data. Second, we explore associations but not causality in the relationships among S&T indicators for innovation. Third, we consider a small segment of innovation indicators relating to S&T, whereas there is a gamut of variables that can be incorporated in future research. Fourth, the data are extracted from a secondary data source (OECD); the aggregated data from multiple sources/models inherently poses some limitations. For instance, data is missing for some years or regions—either it was not collected, or it was collected but not reported. We use patents as an indicator of innovation, but patents are subject to certain drawbacks. Many inventions are not patented, or inventors utilize alternative methods of protection such as secrecy and lead-time. Furthermore, the propensity for patenting varies across countries and industries. Differences in patent regulations and laws pose a challenge for analyzing trends and patterns across countries and over time.
Despite the limitations, our study contributes in many ways to the literature on innovation and policy making. While most studies adopt a firm or enterprise level of innovation analysis, we deploy a country-level analysis using a large and comprehensive dataset from the OECD. The breadth of the indicators in the dataset allows for in-depth multi-dimensional analysis. We utilize a dual methodology of visualization and panel analysis, each of which offers a suite of benefits in terms of research insights and knowledge on a phenomenon. Visualization is an assumption-free and data-driven approach, allowing the data to speak for itself. With no pre-conceived notions, the methodology allows for previously undetected patterns and relationships to emerge from the data. Panel analysis provides the researcher with a large number of data points and reduces the issue of multicollinearity among the explanatory research variables. It therefore improves the efficiency of econometric estimates and allows for multidimensional investigation of a phenomenon. In addition to the contributions in terms of methodologies, the research adds to the literature on empirical innovation studies that deploy an analytic approach. By comparing innovation indicators at a national level, this study calls on policy makers to design appropriate horizontal or vertical S&T policies. The analysis of R&D expenditure by sector and by industry, along with R&D personnel, allows for effective and optimum resource allocation and talent distribution. Patent analysis is done incorporating individual applications as well as triadic families, thereby offering an individual and holistic perspective. The study presents insights on the phenomenon of international collaboration in the inventive process. By focusing on cross-border ownership of patents, the study highlights the international flow of knowledge and research funds between countries, offering valuable lessons for global policy making for innovation.
While the study offers a panorama of results for innovation using S&T indicators, more theoretical and empirical work is needed to further advance our understanding of country-level innovation. An important direction for future work is to incorporate other forms of IP in addition to patents in analyzing national innovation. More research is also needed on the relationship to important policy concepts and their performance in different domains. We view the culture of innovation as a particularly fertile area. It is important for a nation not only to be creative in imagining, developing, and commercializing new technologies, products, and services, but also to be adaptable in its attitudes towards change and willingness to take risks (Hofstede 2001; Strychalska-Rudzewicz 2016).
Our analysis shows differences between developed (high income) and developing (middle income) countries in terms of sectoral R&D expenditure. While in developing countries, R&D expenditure is predominantly from the government sector, in developed countries, it is from the business sector. As a nation develops, governmental expenditure on R&D decreases. Businesses and education take on greater roles to fill the gap. It is no surprise that most higher education institutions receive funding from governments and businesses.
The analysis of patents offers interesting revelations. Countries (such as the USA, Japan, Germany, and France) with a high share of patents (individual and patent families) show a low percentage of patents owned by foreign residents. The portfolio of local versus foreign resident ownership of patents offers major implications for taxation and innovation policies at a national level. Countries with high foreign ownership of patents have low tax revenues as a percentage of GDP (Raghupathi and Raghupathi 2017). This is due to the fact that patents can be granted in countries other than those in which they were created, primarily because of the lack of associated costs and because income from IP is mobile (Griffith et al. 2014).
Naturally, multinational companies routinely search for tax havens to locate their IP (Lipsey 2010). In order to spur more local innovation, countries with a large percentage of patents with foreign ownership introduce “patent boxes” that lower the tax rate on income derived from patents. For example, Belgium reduced the tax rate from 34 to 6.8% in 2007, Netherlands from 31.5 to 10% in 2007, Luxembourg from 30.4 to 5.9% in 2008, and the UK from 30 to 24% in 2013 (Griffith et al. 2014). We call on countries that have a high proportion of foreign ownership of patents to devise policies and incentives to encourage ownership by local residents and boost innovation and tax revenues.
Exports are an important aspect of innovation, and there is a relationship between exports and R&D expenditure in an industry. A high level of expenditure on R&D enables more exports by meeting higher standards, while a high level of exports allows countries to recover sufficient capital to focus on R&D. Countries' promotion efforts for exports should parallel ways to maximize innovation creation and economic development (Leonidou et al. 2011). Policy makers need to recognize that the extent of experience within a country affects which factors influence high-technology exports. Additionally, different innovation characteristics can be identified and encouraged to support national innovation.
Lastly, countries need to reinstate horizontal and vertical S&T policies for innovation (Niosi 2010). Horizontal policies apply equally to all sectors (e.g., tax credit for R&D). While these policies are easy to implement and can strengthen existing sectors, they do not contribute to creation of new sectors. For new sectors to emerge, specifically high technology ones that contribute to growth, resources have to be concentrated in that direction. This is very important for developing countries that are looking to reap comparative advantages in targeted sectors. These countries need to develop and apply vertical policies directed to selected sectors. Since we show in our analysis that the ICT sector takes the lead in the number of patents over the years for most countries, it would be worthwhile to target resources and efforts in this sector to boost national innovation. Stimulating local innovation lowers both a dependence on foreign collaboration and foreign patent ownership. This holds promise for countries looking to enhance the contribution to GDP. Additionally, with globalization, economies are moving towards service-based and knowledge-based industries that are primarily ICT-driven, encouraging new patterns of growth and innovation (Raghupathi et al. 2014).
Countries can attain a level of endogenous innovation using multifaceted incentives for science and technology indicators. However, these policies and reforms need to be constantly evaluated and revised in light of the evolutionary economic and educational infrastructure. We propose a call on countries to design sophisticated national innovation ecosystems that integrate disparate policies of science, technology, finance, education, tax, trade, intellectual property, government spending, labor, and regulations in an effort to drive economic growth by fostering innovation.
The datasets supporting the conclusions of this article are available in the OECD repository, https://stats.oecd.org.
Viju Raghupathi is an Associate Professor at the Koppelman School of Business, Brooklyn College of the City University of New York. She received her PhD in Information Systems from The Graduate Center, City University of New York. Her research interests include business analytics, social media, big data, innovation/entrepreneurship, sustainability, corporate governance, and healthcare. She has published in academic journals including Communications of the AIS, Journal of Electronic Commerce Research, IEEE Access, Health Policy and Technology, International Journal of Healthcare Information Systems and Informatics, Information Resources Management Journal, and Information Systems Management.
Wullianallur Raghupathi is a Professor of Information Systems at the Gabelli School of Business, Fordham University, New York; Program Director of the M.S. in Business Analytics Program; and Director of the Center for Digital Transformation (http://www.fordhamcdt.org/). He is co-editor for North America of the International Journal of Health Information Systems & Informatics. He has also guest edited (with Dr. Joseph Tan) a special issue of Topics in Health Information Management (1999) and a special section on healthcare information systems for Communications of the ACM (1997). He was the founding editor of the International Journal of Computational Intelligence and Organizations (1995–1997). He also served as an Ad Hoc Editorial Review Board Member, Journal of Systems Management of the Association for Systems Management, 1996–1997. Prof. Raghupathi has published 40 journal articles and written papers in refereed conference proceedings, abstracts in international conferences, book chapters, editorials, and reviews, including several in the healthcare IT field.
Siyanbola, W., Adeyeye, A., Olaopa, O., & Hassan, O. (2016). Science, technology and innovation indicators in policy-making: the Nigerian experience. Palgrave Communications. https://doi.org/10.1057/palcomms.2016.15.
World Economic Forum (WEF). (2012). The global competitiveness report 2012–2013. http://www3.weforum.org/docs/WEF_GlobalCompetitivenessReport_2012-13.pdf, Retrieved Sep 29, 2017.
#!/usr/bin/env python
"""
Thorium
=======
Thorium is a project that combines various python modules and tools originally
sourced from Nukepedia. It provides a streamlined way of managing their
versions and customizing the installation. While thorium ships as a complete
package, individual submodules can be activated and deactivated via config
files or arguments passed to thorium.
## Installation
Before we get into installation, a quick warning. Thorium is made up of many
submodules that are designed and still released to work independently of
thorium. When thorium imports those modules, it imports them into the global
namespace so that Nuke can access the modules directly, without having to go
through the thorium namespace. It does this by directly accessing and importing
straight into the `__builtin__` namespace. This is normally not recommended.
While every effort has been made to ensure that these submodules are named
uniquely, the python namespace can get very tricky and managers of facility
installations should carefully compare the modules thorium is set to import
with any global facility modules, otherwise those facility modules will
be inaccessible from within Nuke.
Installation can be done via pip (`pip install thorium`), an rpm or by manually
placing the 'thorium' folder in your .nuke directory or anywhere else within
the Nuke python path.
Then, add the following lines to your 'init.py' file:
::
import thorium
thorium.run()
And the following lines to your 'menu.py' file:
::
import thorium
thorium.run_gui()
You can turn off the usage of specific modules by passing a dictionary with the
module name and a bool.
::
import thorium
thorium.run_gui({'animatedSnap3D': False})
Now `animatedSnap3D` will not load, and every other module will. You can
reverse this behavior by passing the `default` argument `False`, which will
cause all modules not specifically listed as True to not be loaded.
::
import thorium
thorium.run_gui({'animatedSnap3D': True}, default=False)
Now `animatedSnap3D` will be the ONLY module that loads- all others will not
load, since the default is False.
## Usage
After the run functions above have executed, each submodule will be available
in it's native namespace. Modules with menu items will appear in their correct
place, and the python commands will be available for use from anywhere in Nuke.
## Classes
GlobalInjector
Injects set attributes directly into the global namespace. Thorium
uses this to import modules into '__builtin__'
## Public Functions
run()
Imports and runs the thorium submodules that should be available to
nuke and scripts at all times.
run_gui()
    Imports and runs the thorium submodules that are only needed for user
    interaction in the GUI.
## License
The MIT License (MIT)
Thorium
Copyright (c) 2014 Sean Wallitsch
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
"""
# =============================================================================
# GLOBALS
# =============================================================================
__author__ = "Sean Wallitsch"
__author_email__ = "sean@grenadehop.com"
__copyright__ = "Copyright 2014, Sean Wallitsch"
__credits__ = [
"Ivan Busquets",
"Philippe Huberdeau",
"Alexey Kuchinski",
"Frank Rueter",
"Sean Wallitsch",
]
__license__ = "MIT"
__version__ = "0.1b5"
__maintainer__ = "Sean Wallitsch"
__maintainer_email__ = "sean@grenadehop.com"
__module_name__ = "thorium"
__short_desc__ = "Combines and manages many Nuke python packages"
__status__ = "Development"
__url__ = "https://github.com/ThoriumGroup/thorium"
# =============================================================================
# EXPORTS
# =============================================================================
__all__ = [
'run',
'run_gui'
]
# =============================================================================
# PRIVATE FUNCTIONS
# =============================================================================
def _importer(module):
"""Imports and returns the given string as a module"""
return __import__(module, globals())
# =============================================================================
# CLASSES
# =============================================================================
class GlobalInjector(object):
"""Inject into the global namespace of "__builtin__"
Assigning to variables declared global in a function, injects them only
into the module's global namespace.
>>> global_namespace = sys.modules['__builtin__'].__dict__
>>> #would need
>>> global_namespace['aname'] = 'avalue'
>>> #With
>>> global_namespace = GlobalInjector()
>>> #one can do
>>> global_namespace.bname = 'bvalue'
>>> #reading from it is simply
>>> bname
    'bvalue'
Class is from the following stackoverflow:
http://stackoverflow.com/questions/11813287/insert-variable-into-global-namespace-from-within-a-function
"""
def __init__(self):
import sys
self.__dict__['modules'] = []
self.__dict__['builtin'] = sys.modules['__builtin__'].__dict__
def __setattr__(self, name, value):
"""Adds an object to the __builtin__ namespace under name.
While this can be used to inject any object into the __builtin__
namespace, it's particularly useful for importing.
>>> global_namespace = GlobalInjector()
>>> global_namespace.random = __import__("random", globals())
>>> random.randint(0, 100)
67
`random` has now been imported into the global namespace. This works
even when global_namespace is within a local scope.
Args:
name : (str)
The variable name the module should be added under.
value : (<module>|any other object)
The python object to be referenced by name.
Returns:
None
Raises:
N/A
"""
self.builtin[name] = value
self.modules.append(name)
def reset(self):
""" Removes the objects that GlobalInjector has placed in the namespace
Note that when used for imported modules, this does not reload, or
uncache the modules.
This is mostly useful for testing.
Args:
N/A
Returns:
None
Raises:
N/A
"""
for module in self.modules:
if module in self.builtin:
del(self.builtin[module])
self.modules = []
# =============================================================================
# PUBLIC FUNCTIONS
# =============================================================================
def run(modules=None, default=True):
"""Imports and runs the submodules that must be available at all times"""
global_namespace = GlobalInjector()
if not modules:
modules = {}
    # No always-on submodules ship with this release yet; when they do, they
    # will be gated on `modules`/`default` and imported here as in run_gui().
# =============================================================================
def run_gui(modules=None, default=True, menu_name='Thorium'):
"""Imports and runs gui only submodules"""
global_namespace = GlobalInjector()
if not modules:
modules = {}
if modules.get('animatedSnap3D', default):
global_namespace.animatedSnap3D = _importer('animatedSnap3D')
animatedSnap3D.run()
if modules.get('cardToTrack', default):
global_namespace.cardToTrack = _importer('cardToTrack')
cardToTrack.run(menu=menu_name)
if modules.get('iconPanel', default):
global_namespace.iconPanel = _importer('iconPanel')
iconPanel.run()
if modules.get('Keying', default):
global_namespace.keying = _importer('keying')
keying.run()
if modules.get('viewerSync', default):
global_namespace.viewerSync = _importer('viewerSync')
viewerSync.run()
The Surveyor 3 camera was easy pickings and brought back to Earth under sterile conditions by the Apollo 12 crew.
You will find in-depth insight into your total compatibility, numerology for birthday connections, the unique purpose (and lessons) of this relationship, and its long-term potential by reading your in-depth soul mate synergy report.
Every one is unique, but the principle of name governs how we bring this uniqueness into expression.
If you have a tie between three or more numbers that appear most frequently, you do not have an intensity number.
Once the leaves are traced, we record the original Tamil prediction, as it is, in the notebook.
This can make them less committed than some may like and can see the number 5 flitting from situation to situation, and relationship to relationship.
Actually my name number is 23, and I had all the good luck in my studies (mechanical engineering, 80%) till May 2014.
With a numerology love match report you will get detailed information about you and your partner to help you navigate through the ups and downs of your relationship, and avoid any possible pitfalls.
You love neatness and tidiness and prefer proper discipline to a lackadaisical approach. You have fantasies and lack a practical approach.
Sir, please also post about planetary relationships in Islamic numerology, because I have been searching a lot (only Vedic, Chaldean and astrology systems are on the internet). I want to confirm from Ilm-e-Jafr; please post any reference tables about 'planetary relationships with numbers and their corresponding signs' and planetary harmonies with their corresponding alphabets.
|
# -*- coding: utf-8 -*-
from __future__ import division
from collections import Iterable
import itertools
from ..exceptions import ValidationError, ConversionError, ModelValidationError, StopValidation
from ..models import Model
from ..transforms import export_loop, EMPTY_LIST, EMPTY_DICT
from .base import BaseType
from six import iteritems
from six import string_types as basestring
from six import text_type as unicode
class MultiType(BaseType):
def validate(self, value):
"""Report dictionary of errors with lists of errors as values of each
key. Used by ModelType and ListType.
"""
errors = {}
for validator in self.validators:
try:
validator(value)
except ModelValidationError as exc:
errors.update(exc.messages)
except StopValidation as exc:
errors.update(exc.messages)
break
if errors:
raise ValidationError(errors)
return value
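The loop above accumulates messages from every validator into one dict and raises once at the end, with `StopValidation` cutting the loop short. A minimal sketch of that accumulation pattern with stand-in exception classes (these are simplified stand-ins, not the schematics classes):

```python
class StopValidation(Exception):
    def __init__(self, messages):
        self.messages = messages


class ValidationFailure(Exception):  # stand-in for ValidationError
    def __init__(self, messages):
        self.messages = messages


def run_validators(value, validators):
    """Run every validator, merging their error dicts; StopValidation
    records its messages and halts further validators."""
    errors = {}
    for validator in validators:
        try:
            validator(value)
        except StopValidation as exc:
            errors.update(exc.messages)
            break
        except ValidationFailure as exc:
            errors.update(exc.messages)
    if errors:
        raise ValidationFailure(errors)
    return value
```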
def export_loop(self, shape_instance, field_converter,
role=None, print_none=False):
raise NotImplementedError
def init_compound_field(self, field, compound_field, **kwargs):
"""
        Some non-BaseType fields require a `field` arg.
        To avoid a name conflict, provide it as `compound_field`.
Example:
comments = ListType(DictType, compound_field=StringType)
"""
if compound_field:
field = field(field=compound_field, **kwargs)
else:
field = field(**kwargs)
return field
class ModelType(MultiType):
def __init__(self, model_class, **kwargs):
self.model_class = model_class
self.fields = self.model_class.fields
validators = kwargs.pop("validators", [])
self.strict = kwargs.pop("strict", True)
def validate_model(model_instance):
model_instance.validate()
return model_instance
super(ModelType, self).__init__(validators=[validate_model] + validators, **kwargs)
def __repr__(self):
return object.__repr__(self)[:-1] + ' for %s>' % self.model_class
def to_native(self, value, mapping=None, context=None):
# We have already checked if the field is required. If it is None it
# should continue being None
if mapping is None:
mapping = {}
if value is None:
return None
if isinstance(value, self.model_class):
return value
if not isinstance(value, dict):
raise ConversionError(
u'Please use a mapping for this field or {0} instance instead of {1}.'.format(
self.model_class.__name__,
type(value).__name__))
# partial submodels now available with import_data (ht ryanolson)
model = self.model_class()
return model.import_data(value, mapping=mapping, context=context,
strict=self.strict)
def to_primitive(self, model_instance, context=None):
primitive_data = {}
for field_name, field, value in model_instance.atoms():
serialized_name = field.serialized_name or field_name
if value is None and model_instance.allow_none(field):
primitive_data[serialized_name] = None
else:
primitive_data[serialized_name] = field.to_primitive(value,
context)
return primitive_data
def export_loop(self, model_instance, field_converter,
role=None, print_none=False):
"""
Calls the main `export_loop` implementation because they are both
supposed to operate on models.
"""
if isinstance(model_instance, self.model_class):
model_class = model_instance.__class__
else:
model_class = self.model_class
shaped = export_loop(model_class, model_instance,
field_converter,
role=role, print_none=print_none)
if shaped and len(shaped) == 0 and self.allow_none():
return shaped
elif shaped:
return shaped
elif print_none:
return shaped
class ListType(MultiType):
def __init__(self, field, min_size=None, max_size=None, **kwargs):
if not isinstance(field, BaseType):
compound_field = kwargs.pop('compound_field', None)
field = self.init_compound_field(field, compound_field, **kwargs)
self.field = field
self.min_size = min_size
self.max_size = max_size
validators = [self.check_length, self.validate_items] + kwargs.pop("validators", [])
super(ListType, self).__init__(validators=validators, **kwargs)
@property
def model_class(self):
return self.field.model_class
def _force_list(self, value):
if value is None or value == EMPTY_LIST:
return []
try:
if isinstance(value, basestring):
raise TypeError()
if isinstance(value, dict):
return [value[unicode(k)] for k in sorted(map(int, value.keys()))]
return list(value)
except TypeError:
return [value]
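The coercion rules in `_force_list` are worth spelling out: `None` (or the empty-list sentinel) becomes `[]`, a dict with numeric string keys is read as a sparse list ordered by key, strings are wrapped rather than iterated character by character, and any non-iterable scalar is wrapped in a one-element list. A standalone restatement of those rules (Python 3 `str` standing in for the six `basestring`):

```python
def force_list(value):
    """Coerce a value into a list, mirroring ListType._force_list."""
    if value is None:
        return []
    try:
        if isinstance(value, str):
            raise TypeError()        # don't iterate strings char by char
        if isinstance(value, dict):
            # {'1': 'b', '0': 'a'} -> ['a', 'b'], ordered by numeric key
            return [value[str(k)] for k in sorted(map(int, value.keys()))]
        return list(value)
    except TypeError:
        return [value]               # scalar: wrap it
```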
def to_native(self, value, context=None):
items = self._force_list(value)
return [self.field.to_native(item, context) for item in items]
def check_length(self, value):
list_length = len(value) if value else 0
if self.min_size is not None and list_length < self.min_size:
message = ({
True: u'Please provide at least %d item.',
False: u'Please provide at least %d items.',
}[self.min_size == 1]) % self.min_size
raise ValidationError(message)
if self.max_size is not None and list_length > self.max_size:
message = ({
True: u'Please provide no more than %d item.',
False: u'Please provide no more than %d items.',
}[self.max_size == 1]) % self.max_size
raise ValidationError(message)
def validate_items(self, items):
errors = []
for item in items:
try:
self.field.validate(item)
except ValidationError as exc:
errors.append(exc.messages)
if errors:
raise ValidationError(errors)
def to_primitive(self, value, context=None):
return [self.field.to_primitive(item, context) for item in value]
def export_loop(self, list_instance, field_converter,
role=None, print_none=False):
"""Loops over each item in the model and applies either the field
transform or the multitype transform. Essentially functions the same
as `transforms.export_loop`.
"""
data = []
for value in list_instance:
if hasattr(self.field, 'export_loop'):
shaped = self.field.export_loop(value, field_converter,
role=role)
feels_empty = shaped and len(shaped) == 0
else:
shaped = field_converter(self.field, value)
feels_empty = shaped is None
# Print if we want empty or found a value
if feels_empty and self.field.allow_none():
data.append(shaped)
elif shaped is not None:
data.append(shaped)
elif print_none:
data.append(shaped)
# Return data if the list contains anything
if len(data) > 0:
return data
elif len(data) == 0 and self.allow_none():
return data
elif print_none:
return data
class DictType(MultiType):
def __init__(self, field, coerce_key=None, **kwargs):
if not isinstance(field, BaseType):
compound_field = kwargs.pop('compound_field', None)
field = self.init_compound_field(field, compound_field, **kwargs)
self.coerce_key = coerce_key or unicode
self.field = field
validators = [self.validate_items] + kwargs.pop("validators", [])
super(DictType, self).__init__(validators=validators, **kwargs)
@property
def model_class(self):
return self.field.model_class
def to_native(self, value, safe=False, context=None):
if value == EMPTY_DICT:
value = {}
value = value or {}
if not isinstance(value, dict):
raise ValidationError(u'Only dictionaries may be used in a DictType')
return dict((self.coerce_key(k), self.field.to_native(v, context))
for k, v in iteritems(value))
def validate_items(self, items):
errors = {}
for key, value in iteritems(items):
try:
self.field.validate(value)
except ValidationError as exc:
errors[key] = exc
if errors:
raise ValidationError(errors)
def to_primitive(self, value, context=None):
return dict((unicode(k), self.field.to_primitive(v, context))
for k, v in iteritems(value))
def export_loop(self, dict_instance, field_converter,
role=None, print_none=False):
"""Loops over each item in the model and applies either the field
transform or the multitype transform. Essentially functions the same
as `transforms.export_loop`.
"""
data = {}
for key, value in iteritems(dict_instance):
if hasattr(self.field, 'export_loop'):
shaped = self.field.export_loop(value, field_converter,
role=role)
feels_empty = shaped and len(shaped) == 0
else:
shaped = field_converter(self.field, value)
feels_empty = shaped is None
if feels_empty and self.field.allow_none():
data[key] = shaped
elif shaped is not None:
data[key] = shaped
elif print_none:
data[key] = shaped
if len(data) > 0:
return data
elif len(data) == 0 and self.allow_none():
return data
elif print_none:
return data
class PolyModelType(MultiType):
def __init__(self, model_classes, **kwargs):
if isinstance(model_classes, type) and issubclass(model_classes, Model):
self.model_classes = (model_classes,)
allow_subclasses = True
elif isinstance(model_classes, Iterable) \
and not isinstance(model_classes, basestring):
self.model_classes = tuple(model_classes)
allow_subclasses = False
else:
raise Exception("The first argument to PolyModelType.__init__() "
"must be a model or an iterable.")
validators = kwargs.pop("validators", [])
self.strict = kwargs.pop("strict", True)
self.claim_function = kwargs.pop("claim_function", None)
self.allow_subclasses = kwargs.pop("allow_subclasses", allow_subclasses)
def validate_model(model_instance):
model_instance.validate()
return model_instance
MultiType.__init__(self, validators=[validate_model] + validators, **kwargs)
def __repr__(self):
return object.__repr__(self)[:-1] + ' for %s>' % str(self.model_classes)
def is_allowed_model(self, model_instance):
if self.allow_subclasses:
if isinstance(model_instance, self.model_classes):
return True
else:
if model_instance.__class__ in self.model_classes:
return True
return False
def to_native(self, value, mapping=None, context=None):
if mapping is None:
mapping = {}
if value is None:
return None
if self.is_allowed_model(value):
return value
if not isinstance(value, dict):
if len(self.model_classes) > 1:
instanceof_msg = 'one of: {}'.format(', '.join(
cls.__name__ for cls in self.model_classes))
else:
instanceof_msg = self.model_classes[0].__name__
raise ConversionError(u'Please use a mapping for this field or '
'an instance of {}'.format(instanceof_msg))
model_class = self.find_model(value)
model = model_class()
return model.import_data(value, mapping=mapping, context=context,
strict=self.strict)
def find_model(self, data):
"""Finds the intended type by consulting potential classes or `claim_function`."""
chosen_class = None
if self.claim_function:
chosen_class = self.claim_function(self, data)
else:
candidates = self.model_classes
if self.allow_subclasses:
candidates = itertools.chain.from_iterable(
([m] + m._subclasses for m in candidates))
fallback = None
matching_classes = []
for cls in candidates:
match = None
if '_claim_polymorphic' in cls.__dict__:
match = cls._claim_polymorphic(data)
elif not fallback: # The first model that doesn't define the hook
fallback = cls # can be used as a default if there's no match.
if match:
matching_classes.append(cls)
if not matching_classes and fallback:
chosen_class = fallback
elif len(matching_classes) == 1:
chosen_class = matching_classes[0]
else:
raise Exception("Got ambiguous input for polymorphic field")
if chosen_class:
return chosen_class
else:
raise Exception("Input for polymorphic field did not match any model")
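The resolution order in `find_model` — explicit `claim_function` first, then any `_claim_polymorphic` hooks, then the first hook-less class as a fallback — can be exercised with plain classes. A sketch of the same logic (the `Cat`/`Dog` classes are illustrative, and the error handling is collapsed into one exception):

```python
def find_model(data, candidates, claim_function=None):
    """Pick the model class that claims `data`, mirroring
    PolyModelType.find_model's resolution order."""
    if claim_function:
        return claim_function(data)
    fallback = None
    matches = []
    for cls in candidates:
        if '_claim_polymorphic' in cls.__dict__:
            if cls._claim_polymorphic(data):
                matches.append(cls)
        elif fallback is None:
            # First class without a hook becomes the default.
            fallback = cls
    if not matches and fallback:
        return fallback
    if len(matches) == 1:
        return matches[0]
    raise Exception("ambiguous or unmatched polymorphic input")


class Cat(object):
    @staticmethod
    def _claim_polymorphic(data):
        return data.get('sound') == 'meow'


class Dog(object):
    pass  # no hook: usable as the fallback
```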
def export_loop(self, model_instance, field_converter,
role=None, print_none=False):
model_class = model_instance.__class__
if not self.is_allowed_model(model_instance):
raise Exception("Cannot export: {} is not an allowed type".format(model_class))
shaped = export_loop(model_class, model_instance,
field_converter,
role=role, print_none=print_none)
if shaped and len(shaped) == 0 and self.allow_none():
return shaped
elif shaped:
return shaped
elif print_none:
return shaped
|
It’s both the scenery and the current political state of the country that make this trail ride at the remarkable ranch, quite possibly, the wildest ride in the West! Rancho de la Osa houses Arizona’s oldest continually used building, constructed in an Indian village around 1720 by Jesuit missionaries.
For the second wildest ride, the DRA is going to take you to our northern border with Canada. At Bar W Ranch, there is a full-day horseback trail ride that will take guests to an extremely wild site. Throughout the day the terrain changes, and riders can walk, trot, or lope through open meadows and lush forests before exploring Lake Koocanusa. Riders think they’ve entered a postcard with a majestic mountain backdrop and long sandy beaches as far as the eye can see. Lake Koocanusa signifies the cooperation between two friendly neighbors, “Koocanusa” is an abbreviation of KOOtenai River and the first three letters of CANada and USA.
The ranch will also take you on trail rides through Taylor Creek Canyon, Beaver Canyon, Whitewater Canyon, Hoyt Canyon, just some of the spectacular canyons that are truly awe-inspiring and jaw dropping. Gaze up at canyon walls as you ride through creeks fed by warm spring waters.
Click here to see more about dude ranch vacations!
|
# -*- coding: cp1252 -*-
##
# <p> Portions copyright © 2005-2006 Stephen John Machin, Lingfo Pty Ltd</p>
# <p>This module is part of the xlrd package, which is released under a BSD-style licence.</p>
##
# 2007-04-22 SJM Remove experimental "trimming" facility.
from biffh import *
from timemachine import *
from struct import unpack
from formula import dump_formula, decompile_formula, rangename2d
from formatting import nearest_colour_index
import time
DEBUG = 0
_WINDOW2_options = (
# Attribute names and initial values to use in case
# a WINDOW2 record is not written.
("show_formulas", 0),
("show_grid_lines", 1),
("show_sheet_headers", 1),
("panes_are_frozen", 0),
("show_zero_values", 1),
("automatic_grid_line_colour", 1),
("columns_from_right_to_left", 0),
("show_outline_symbols", 1),
("remove_splits_if_pane_freeze_is_removed", 0),
("sheet_selected", 0),
# "sheet_visible" appears to be merely a clone of "sheet_selected".
# The real thing is the visibility attribute from the BOUNDSHEET record.
("sheet_visible", 0),
("show_in_page_break_preview", 0),
)
##
# <p>Contains the data for one worksheet.</p>
#
# <p>In the cell access functions, "rowx" is a row index, counting from zero, and "colx" is a
# column index, counting from zero.
# Negative values for row/column indexes and slice positions are supported in the expected fashion.</p>
#
# <p>For information about cell types and cell values, refer to the documentation of the Cell class.</p>
#
# <p>WARNING: You don't call this class yourself. You access Sheet objects via the Book object that
# was returned when you called xlrd.open_workbook("myfile.xls").</p>
class Sheet(BaseObject):
##
# Name of sheet.
name = ''
##
# Number of rows in sheet. A row index is in range(thesheet.nrows).
nrows = 0
##
# Number of columns in sheet. A column index is in range(thesheet.ncols).
ncols = 0
##
# The map from a column index to a Colinfo object. Often there is an entry
# in COLINFO records for all column indexes in range(257).
# Note that xlrd ignores the entry for the non-existent
# 257th column. On the other hand, there may be no entry for unused columns.
# <br /> -- New in version 0.6.1
colinfo_map = {}
##
# The map from a row index to a Rowinfo object. Note that it is possible
# to have missing entries -- at least one source of XLS files doesn't
# bother writing ROW records.
# <br /> -- New in version 0.6.1
rowinfo_map = {}
##
# List of address ranges of cells containing column labels.
# These are set up in Excel by Insert > Name > Labels > Columns.
# <br> -- New in version 0.6.0
# <br>How to deconstruct the list:
# <pre>
# for crange in thesheet.col_label_ranges:
# rlo, rhi, clo, chi = crange
# for rx in xrange(rlo, rhi):
# for cx in xrange(clo, chi):
# print "Column label at (rowx=%d, colx=%d) is %r" \
    #         % (rx, cx, thesheet.cell_value(rx, cx))
# </pre>
col_label_ranges = []
##
# List of address ranges of cells containing row labels.
# For more details, see <i>col_label_ranges</i> above.
# <br> -- New in version 0.6.0
row_label_ranges = []
##
# List of address ranges of cells which have been merged.
# These are set up in Excel by Format > Cells > Alignment, then ticking
# the "Merge cells" box.
# <br> -- New in version 0.6.1
# <br>How to deconstruct the list:
# <pre>
# for crange in thesheet.merged_cells:
# rlo, rhi, clo, chi = crange
# for rowx in xrange(rlo, rhi):
# for colx in xrange(clo, chi):
# # cell (rlo, clo) (the top left one) will carry the data
# # and formatting info; the remainder will be recorded as
# # blank cells, but a renderer will apply the formatting info
# # for the top left cell (e.g. border, pattern) to all cells in
# # the range.
# </pre>
merged_cells = []
##
# Default column width from DEFCOLWIDTH record, else None.
# From the OOo docs:<br />
# """Column width in characters, using the width of the zero character
# from default font (first FONT record in the file). Excel adds some
# extra space to the default width, depending on the default font and
# default font size. The algorithm how to exactly calculate the resulting
# column width is not known.<br />
# Example: The default width of 8 set in this record results in a column
# width of 8.43 using Arial font with a size of 10 points."""<br />
# For the default hierarchy, refer to the Colinfo class above.
# <br /> -- New in version 0.6.1
defcolwidth = None
##
# Default column width from STANDARDWIDTH record, else None.
# From the OOo docs:<br />
# """Default width of the columns in 1/256 of the width of the zero
# character, using default font (first FONT record in the file)."""<br />
# For the default hierarchy, refer to the Colinfo class above.
# <br /> -- New in version 0.6.1
standardwidth = None
##
# Default value to be used for a row if there is
# no ROW record for that row.
# From the <i>optional</i> DEFAULTROWHEIGHT record.
default_row_height = None
##
# Default value to be used for a row if there is
# no ROW record for that row.
# From the <i>optional</i> DEFAULTROWHEIGHT record.
default_row_height_mismatch = None
##
# Default value to be used for a row if there is
# no ROW record for that row.
# From the <i>optional</i> DEFAULTROWHEIGHT record.
default_row_hidden = None
##
# Default value to be used for a row if there is
# no ROW record for that row.
# From the <i>optional</i> DEFAULTROWHEIGHT record.
default_additional_space_above = None
##
# Default value to be used for a row if there is
# no ROW record for that row.
# From the <i>optional</i> DEFAULTROWHEIGHT record.
default_additional_space_below = None
##
# Visibility of the sheet. 0 = visible, 1 = hidden (can be unhidden
# by user -- Format/Sheet/Unhide), 2 = "very hidden" (can be unhidden
# only by VBA macro).
visibility = 0
##
# A 256-element tuple corresponding to the contents of the GCW record for this sheet.
# If no such record, treat as all bits zero.
# Applies to BIFF4-7 only. See docs of Colinfo class for discussion.
gcw = (0, ) * 256
def __init__(self, book, position, name, number):
self.book = book
self.biff_version = book.biff_version
self._position = position
self.logfile = book.logfile
self.pickleable = book.pickleable
self.dont_use_array = not(array_array and (CAN_PICKLE_ARRAY or not book.pickleable))
self.name = name
self.number = number
self.verbosity = book.verbosity
self.formatting_info = book.formatting_info
self._xf_index_to_xl_type_map = book._xf_index_to_xl_type_map
self.nrows = 0 # actual, including possibly empty cells
self.ncols = 0
self._maxdatarowx = -1 # highest rowx containing a non-empty cell
self._maxdatacolx = -1 # highest colx containing a non-empty cell
self._dimnrows = 0 # as per DIMENSIONS record
self._dimncols = 0
self._cell_values = []
self._cell_types = []
self._cell_xf_indexes = []
self._need_fix_ragged_rows = 0
self.defcolwidth = None
self.standardwidth = None
self.default_row_height = None
self.default_row_height_mismatch = 0
self.default_row_hidden = 0
self.default_additional_space_above = 0
self.default_additional_space_below = 0
self.colinfo_map = {}
self.rowinfo_map = {}
self.col_label_ranges = []
self.row_label_ranges = []
self.merged_cells = []
self._xf_index_stats = [0, 0, 0, 0]
self.visibility = book._sheet_visibility[number] # from BOUNDSHEET record
for attr, defval in _WINDOW2_options:
setattr(self, attr, defval)
self.first_visible_rowx = 0
self.first_visible_colx = 0
self.gridline_colour_index = 0x40
self.gridline_colour_rgb = None # pre-BIFF8
self.cached_page_break_preview_mag_factor = 0
self.cached_normal_view_mag_factor = 0
#### Don't initialise this here, use class attribute initialisation.
#### self.gcw = (0, ) * 256 ####
if self.biff_version >= 80:
self.utter_max_rows = 65536
else:
self.utter_max_rows = 16384
##
# Cell object in the given row and column.
def cell(self, rowx, colx):
if self.formatting_info:
xfx = self.cell_xf_index(rowx, colx)
else:
xfx = None
return Cell(
self._cell_types[rowx][colx],
self._cell_values[rowx][colx],
xfx,
)
##
# Value of the cell in the given row and column.
def cell_value(self, rowx, colx):
return self._cell_values[rowx][colx]
##
# Type of the cell in the given row and column.
# Refer to the documentation of the Cell class.
def cell_type(self, rowx, colx):
return self._cell_types[rowx][colx]
##
# XF index of the cell in the given row and column.
# This is an index into Book.raw_xf_list and Book.computed_xf_list.
# <br /> -- New in version 0.6.1
def cell_xf_index(self, rowx, colx):
self.req_fmt_info()
xfx = self._cell_xf_indexes[rowx][colx]
if xfx > -1:
self._xf_index_stats[0] += 1
return xfx
# Check for a row xf_index
try:
xfx = self.rowinfo_map[rowx].xf_index
if xfx > -1:
self._xf_index_stats[1] += 1
return xfx
except KeyError:
pass
# Check for a column xf_index
try:
xfx = self.colinfo_map[colx].xf_index
assert xfx > -1
self._xf_index_stats[2] += 1
return xfx
except KeyError:
# If all else fails, 15 is used as hardwired global default xf_index.
self._xf_index_stats[3] += 1
return 15
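The lookup cascade in `cell_xf_index` — the cell's own XF index, then the row default, then the column default, then the hardwired global default of 15 — mirrors how Excel resolves formatting. The same cascade in a compact, data-structure-only sketch (plain dicts stand in for the Rowinfo/Colinfo objects):

```python
def resolve_xf_index(cell_xfx, rowinfo_map, colinfo_map, rowx, colx):
    """Resolve a cell's XF (formatting) index via the BIFF cascade:
    cell -> row default -> column default -> global default (15)."""
    if cell_xfx > -1:
        return cell_xfx
    row = rowinfo_map.get(rowx)
    if row is not None and row.get('xf_index', -1) > -1:
        return row['xf_index']
    col = colinfo_map.get(colx)
    if col is not None:
        return col['xf_index']
    return 15
```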
##
# Returns a sequence of the Cell objects in the given row.
def row(self, rowx):
return [
self.cell(rowx, colx)
for colx in xrange(self.ncols)
]
##
# Returns a slice of the types
# of the cells in the given row.
def row_types(self, rowx, start_colx=0, end_colx=None):
if end_colx is None:
return self._cell_types[rowx][start_colx:]
return self._cell_types[rowx][start_colx:end_colx]
##
# Returns a slice of the values
# of the cells in the given row.
def row_values(self, rowx, start_colx=0, end_colx=None):
if end_colx is None:
return self._cell_values[rowx][start_colx:]
return self._cell_values[rowx][start_colx:end_colx]
##
# Returns a slice of the Cell objects in the given row.
def row_slice(self, rowx, start_colx=0, end_colx=None):
nc = self.ncols
if start_colx < 0:
start_colx += nc
if start_colx < 0:
start_colx = 0
if end_colx is None or end_colx > nc:
end_colx = nc
elif end_colx < 0:
end_colx += nc
return [
self.cell(rowx, colx)
for colx in xrange(start_colx, end_colx)
]
##
# Returns a slice of the Cell objects in the given column.
def col_slice(self, colx, start_rowx=0, end_rowx=None):
nr = self.nrows
if start_rowx < 0:
start_rowx += nr
if start_rowx < 0:
start_rowx = 0
if end_rowx is None or end_rowx > nr:
end_rowx = nr
elif end_rowx < 0:
end_rowx += nr
return [
self.cell(rowx, colx)
for rowx in xrange(start_rowx, end_rowx)
]
##
# Returns a slice of the values of the cells in the given column.
def col_values(self, colx, start_rowx=0, end_rowx=None):
nr = self.nrows
if start_rowx < 0:
start_rowx += nr
if start_rowx < 0:
start_rowx = 0
if end_rowx is None or end_rowx > nr:
end_rowx = nr
elif end_rowx < 0:
end_rowx += nr
return [
self._cell_values[rowx][colx]
for rowx in xrange(start_rowx, end_rowx)
]
##
# Returns a slice of the types of the cells in the given column.
def col_types(self, colx, start_rowx=0, end_rowx=None):
nr = self.nrows
if start_rowx < 0:
start_rowx += nr
if start_rowx < 0:
start_rowx = 0
if end_rowx is None or end_rowx > nr:
end_rowx = nr
elif end_rowx < 0:
end_rowx += nr
return [
self._cell_types[rowx][colx]
for rowx in xrange(start_rowx, end_rowx)
]
##
# Returns a sequence of the Cell objects in the given column.
def col(self, colx):
return self.col_slice(colx)
# Above two lines just for the docs. Here's the real McCoy:
col = col_slice
# === Following methods are used in building the worksheet.
# === They are not part of the API.
def extend_cells(self, nr, nc):
# print "extend_cells_2", self.nrows, self.ncols, nr, nc
assert 1 <= nc <= 256
assert 1 <= nr <= self.utter_max_rows
if nr <= self.nrows:
# New cell is in an existing row, so extend that row (if necessary).
# Note that nr < self.nrows means that the cell data
# is not in ascending row order!!
self._need_fix_ragged_rows = 1
nrx = nr - 1
trow = self._cell_types[nrx]
tlen = len(trow)
nextra = max(nc, self.ncols) - tlen
if nextra > 0:
xce = XL_CELL_EMPTY
if self.dont_use_array:
trow.extend([xce] * nextra)
if self.formatting_info:
self._cell_xf_indexes[nrx].extend([-1] * nextra)
else:
aa = array_array
trow.extend(aa('B', [xce]) * nextra)
if self.formatting_info:
self._cell_xf_indexes[nrx].extend(aa('h', [-1]) * nextra)
self._cell_values[nrx].extend([''] * nextra)
if nc > self.ncols:
self.ncols = nc
self._need_fix_ragged_rows = 1
if nr > self.nrows:
scta = self._cell_types.append
scva = self._cell_values.append
scxa = self._cell_xf_indexes.append
fmt_info = self.formatting_info
xce = XL_CELL_EMPTY
nc = self.ncols
if self.dont_use_array:
for _unused in xrange(self.nrows, nr):
scta([xce] * nc)
scva([''] * nc)
if fmt_info:
scxa([-1] * nc)
else:
aa = array_array
for _unused in xrange(self.nrows, nr):
scta(aa('B', [xce]) * nc)
scva([''] * nc)
if fmt_info:
scxa(aa('h', [-1]) * nc)
self.nrows = nr
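`extend_cells` grows a ragged two-dimensional table lazily: only the row being written to is padded out to the new width, new rows are appended at full width, and earlier short rows stay ragged until `fix_ragged_rows` runs. The same growth strategy in a self-contained sketch (list-of-lists only, ignoring the `array` optimisation and the separate types/values/xf stores):

```python
EMPTY = 0

def extend_cells(values, nrows_needed, ncols_needed):
    """Grow a ragged list-of-lists so cell
    (nrows_needed-1, ncols_needed-1) is addressable."""
    ncols = max([ncols_needed] + [len(r) for r in values])
    # Pad the target row (if it already exists) out to the new width.
    if nrows_needed <= len(values):
        row = values[nrows_needed - 1]
        row.extend([EMPTY] * (ncols - len(row)))
    # Append any missing rows at the current full width.
    while len(values) < nrows_needed:
        values.append([EMPTY] * ncols)
    return values
```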
def fix_ragged_rows(self):
t0 = time.time()
ncols = self.ncols
xce = XL_CELL_EMPTY
aa = array_array
s_cell_types = self._cell_types
s_cell_values = self._cell_values
s_cell_xf_indexes = self._cell_xf_indexes
s_dont_use_array = self.dont_use_array
s_fmt_info = self.formatting_info
totrowlen = 0
for rowx in xrange(self.nrows):
trow = s_cell_types[rowx]
rlen = len(trow)
totrowlen += rlen
nextra = ncols - rlen
if nextra > 0:
s_cell_values[rowx][rlen:] = [''] * nextra
if s_dont_use_array:
trow[rlen:] = [xce] * nextra
if s_fmt_info:
s_cell_xf_indexes[rowx][rlen:] = [-1] * nextra
else:
trow.extend(aa('B', [xce]) * nextra)
if s_fmt_info:
s_cell_xf_indexes[rowx][rlen:] = aa('h', [-1]) * nextra
self._fix_ragged_rows_time = time.time() - t0
if 0 and self.nrows:
avgrowlen = float(totrowlen) / self.nrows
print >> self.logfile, \
"sheet %d: avg row len %.1f; max row len %d" \
% (self.number, avgrowlen, self.ncols)
def tidy_dimensions(self):
if self.verbosity >= 3:
fprintf(self.logfile,
"tidy_dimensions: nrows=%d ncols=%d _need_fix_ragged_rows=%d\n",
self.nrows, self.ncols, self._need_fix_ragged_rows,
)
if 1 and self.merged_cells:
nr = nc = 0
umaxrows = self.utter_max_rows
for crange in self.merged_cells:
rlo, rhi, clo, chi = crange
if not (0 <= rlo < rhi <= umaxrows) \
or not (0 <= clo < chi <= 256):
fprintf(self.logfile,
"*** WARNING: sheet #%d (%r), MERGEDCELLS bad range %r\n",
self.number, self.name, crange)
if rhi > nr: nr = rhi
if chi > nc: nc = chi
self.extend_cells(nr, nc)
if self.verbosity >= 1 \
and (self.nrows != self._dimnrows or self.ncols != self._dimncols):
fprintf(self.logfile,
"NOTE *** sheet %d (%r): DIMENSIONS R,C = %d,%d should be %d,%d\n",
self.number,
self.name,
self._dimnrows,
self._dimncols,
self.nrows,
self.ncols,
)
if self._need_fix_ragged_rows:
self.fix_ragged_rows()
def put_cell(self, rowx, colx, ctype, value, xf_index):
try:
self._cell_types[rowx][colx] = ctype
self._cell_values[rowx][colx] = value
if self.formatting_info:
self._cell_xf_indexes[rowx][colx] = xf_index
except IndexError:
# print >> self.logfile, "put_cell extending", rowx, colx
self.extend_cells(rowx+1, colx+1)
try:
self._cell_types[rowx][colx] = ctype
self._cell_values[rowx][colx] = value
if self.formatting_info:
self._cell_xf_indexes[rowx][colx] = xf_index
except:
print >> self.logfile, "put_cell", rowx, colx
raise
except:
print >> self.logfile, "put_cell", rowx, colx
raise
def put_blank_cell(self, rowx, colx, xf_index):
# This is used for cells from BLANK and MULBLANK records
ctype = XL_CELL_BLANK
value = ''
try:
self._cell_types[rowx][colx] = ctype
self._cell_values[rowx][colx] = value
self._cell_xf_indexes[rowx][colx] = xf_index
except IndexError:
# print >> self.logfile, "put_cell extending", rowx, colx
self.extend_cells(rowx+1, colx+1)
try:
self._cell_types[rowx][colx] = ctype
self._cell_values[rowx][colx] = value
self._cell_xf_indexes[rowx][colx] = xf_index
except:
print >> self.logfile, "put_cell", rowx, colx
raise
except:
print >> self.logfile, "put_cell", rowx, colx
raise
def put_number_cell(self, rowx, colx, value, xf_index):
ctype = self._xf_index_to_xl_type_map[xf_index]
try:
self._cell_types[rowx][colx] = ctype
self._cell_values[rowx][colx] = value
if self.formatting_info:
self._cell_xf_indexes[rowx][colx] = xf_index
except IndexError:
# print >> self.logfile, "put_number_cell extending", rowx, colx
self.extend_cells(rowx+1, colx+1)
try:
self._cell_types[rowx][colx] = ctype
self._cell_values[rowx][colx] = value
if self.formatting_info:
self._cell_xf_indexes[rowx][colx] = xf_index
except:
print >> self.logfile, "put_number_cell", rowx, colx
raise
except:
print >> self.logfile, "put_number_cell", rowx, colx
raise
# === Methods after this line neither know nor care about how cells are stored.
def read(self, bk):
global rc_stats
DEBUG = 0
blah = DEBUG or self.verbosity >= 2
blah_rows = DEBUG or self.verbosity >= 4
blah_formulas = 0 and blah
oldpos = bk._position
bk.position(self._position)
XL_SHRFMLA_ETC_ETC = (
XL_SHRFMLA, XL_ARRAY, XL_TABLEOP, XL_TABLEOP2,
XL_ARRAY2, XL_TABLEOP_B2,
)
self_put_number_cell = self.put_number_cell
self_put_cell = self.put_cell
self_put_blank_cell = self.put_blank_cell
local_unpack = unpack
bk_get_record_parts = bk.get_record_parts
bv = self.biff_version
fmt_info = self.formatting_info
eof_found = 0
while 1:
# if DEBUG: print "SHEET.READ: about to read from position %d" % bk._position
rc, data_len, data = bk_get_record_parts()
# if rc in rc_stats:
# rc_stats[rc] += 1
# else:
# rc_stats[rc] = 1
# if DEBUG: print "SHEET.READ: op 0x%04x, %d bytes %r" % (rc, data_len, data)
if rc == XL_NUMBER:
rowx, colx, xf_index, d = local_unpack('<HHHd', data)
# if xf_index == 0:
# fprintf(self.logfile,
# "NUMBER: r=%d c=%d xfx=%d %f\n", rowx, colx, xf_index, d)
self_put_number_cell(rowx, colx, d, xf_index)
elif rc == XL_LABELSST:
rowx, colx, xf_index, sstindex = local_unpack('<HHHi', data)
# print "LABELSST", rowx, colx, sstindex, bk._sharedstrings[sstindex]
self_put_cell(rowx, colx, XL_CELL_TEXT, bk._sharedstrings[sstindex], xf_index)
elif rc == XL_LABEL or rc == XL_RSTRING:
# RSTRING has extra richtext info at the end, but we ignore it.
rowx, colx, xf_index = local_unpack('<HHH', data[0:6])
if bv < BIFF_FIRST_UNICODE:
strg = unpack_string(data, 6, bk.encoding, lenlen=2)
else:
strg = unpack_unicode(data, 6, lenlen=2)
self_put_cell(rowx, colx, XL_CELL_TEXT, strg, xf_index)
elif rc == XL_RK:
rowx, colx, xf_index = local_unpack('<HHH', data[:6])
d = unpack_RK(data[6:10])
self_put_number_cell(rowx, colx, d, xf_index)
elif rc == XL_MULRK:
mulrk_row, mulrk_first = local_unpack('<HH', data[0:4])
mulrk_last, = local_unpack('<H', data[-2:])
pos = 4
for colx in xrange(mulrk_first, mulrk_last+1):
xf_index, = local_unpack('<H', data[pos:pos+2])
d = unpack_RK(data[pos+2:pos+6])
pos += 6
self_put_number_cell(mulrk_row, colx, d, xf_index)
elif rc == XL_ROW:
# Version 0.6.0a3: ROW records are just not worth using (for memory allocation).
# Version 0.6.1: now used for formatting info.
if not fmt_info: continue
rowx, bits1, bits2 = local_unpack('<H4xH4xi', data[0:16])
if not(0 <= rowx < self.utter_max_rows):
print >> self.logfile, \
"*** NOTE: ROW record has row index %d; " \
"should have 0 <= rowx < %d -- record ignored!" \
% (rowx, self.utter_max_rows)
continue
r = Rowinfo()
# Using upkbits() is far too slow on a file
# with 30 sheets each with 10K rows :-(
# upkbits(r, bits1, (
# ( 0, 0x7FFF, 'height'),
# (15, 0x8000, 'has_default_height'),
# ))
# upkbits(r, bits2, (
# ( 0, 0x00000007, 'outline_level'),
# ( 4, 0x00000010, 'outline_group_starts_ends'),
# ( 5, 0x00000020, 'hidden'),
# ( 6, 0x00000040, 'height_mismatch'),
# ( 7, 0x00000080, 'has_default_xf_index'),
# (16, 0x0FFF0000, 'xf_index'),
# (28, 0x10000000, 'additional_space_above'),
# (29, 0x20000000, 'additional_space_below'),
# ))
# So:
r.height = bits1 & 0x7fff
r.has_default_height = (bits1 >> 15) & 1
r.outline_level = bits2 & 7
r.outline_group_starts_ends = (bits2 >> 4) & 1
r.hidden = (bits2 >> 5) & 1
r.height_mismatch = (bits2 >> 6) & 1
r.has_default_xf_index = (bits2 >> 7) & 1
r.xf_index = (bits2 >> 16) & 0xfff
r.additional_space_above = (bits2 >> 28) & 1
r.additional_space_below = (bits2 >> 29) & 1
if not r.has_default_xf_index:
r.xf_index = -1
self.rowinfo_map[rowx] = r
if 0 and r.xf_index > -1:
fprintf(self.logfile,
"**ROW %d %d %d\n",
self.number, rowx, r.xf_index)
if blah_rows:
print >> self.logfile, 'ROW', rowx, bits1, bits2
r.dump(self.logfile,
header="--- sh #%d, rowx=%d ---" % (self.number, rowx))
elif rc & 0xff == XL_FORMULA: # 06, 0206, 0406
# DEBUG = 1
# if DEBUG: print "FORMULA: rc: 0x%04x data: %r" % (rc, data)
rowx, colx, xf_index, flags = local_unpack('<HHHxxxxxxxxH', data[0:16])
if blah_formulas: # testing formula dumper
fprintf(self.logfile, "FORMULA: rowx=%d colx=%d\n", rowx, colx)
fmlalen = local_unpack("<H", data[20:22])[0]
decompile_formula(bk, data[22:], fmlalen,
reldelta=0, browx=rowx, bcolx=colx, blah=1)
if data[12] == '\xff' and data[13] == '\xff':
if data[6] == '\x00':
# need to read next record (STRING)
gotstring = 0
# if flags & 8:
if 1: # "flags & 8" applies only to SHRFMLA
# actually there's an optional SHRFMLA or ARRAY etc record to skip over
rc2, data2_len, data2 = bk.get_record_parts()
if rc2 == XL_STRING:
gotstring = 1
elif rc2 == XL_ARRAY:
row1x, rownx, col1x, colnx, array_flags, tokslen = \
local_unpack("<HHBBBxxxxxH", data2[:14])
if blah_formulas:
fprintf(self.logfile, "ARRAY: %d %d %d %d %d\n",
row1x, rownx, col1x, colnx, array_flags)
dump_formula(bk, data2[14:], tokslen, bv, reldelta=0, blah=1)
elif rc2 == XL_SHRFMLA:
row1x, rownx, col1x, colnx, nfmlas, tokslen = \
local_unpack("<HHBBxBH", data2[:10])
if blah_formulas:
fprintf(self.logfile, "SHRFMLA (sub): %d %d %d %d %d\n",
row1x, rownx, col1x, colnx, nfmlas)
decompile_formula(bk, data2[10:], tokslen, reldelta=1, blah=1)
elif rc2 not in XL_SHRFMLA_ETC_ETC:
raise XLRDError(
"Expected SHRFMLA, ARRAY, TABLEOP* or STRING record; found 0x%04x" % rc2)
# if DEBUG: print "gotstring:", gotstring
# now for the STRING record
if not gotstring:
rc2, _unused_len, data2 = bk.get_record_parts()
if rc2 != XL_STRING: raise XLRDError("Expected STRING record; found 0x%04x" % rc2)
# if DEBUG: print "STRING: data=%r BIFF=%d cp=%d" % (data2, self.biff_version, bk.encoding)
if self.biff_version < BIFF_FIRST_UNICODE:
strg = unpack_string(data2, 0, bk.encoding, lenlen=2)
else:
strg = unpack_unicode(data2, 0, lenlen=2)
self.put_cell(rowx, colx, XL_CELL_TEXT, strg, xf_index)
# if DEBUG: print "FORMULA strg %r" % strg
elif data[6] == '\x01':
# boolean formula result
value = ord(data[8])
self.put_cell(rowx, colx, XL_CELL_BOOLEAN, value, xf_index)
elif data[6] == '\x02':
# Error in cell
value = ord(data[8])
self.put_cell(rowx, colx, XL_CELL_ERROR, value, xf_index)
elif data[6] == '\x03':
# empty ... i.e. empty (zero-length) string, NOT an empty cell.
self.put_cell(rowx, colx, XL_CELL_TEXT, u"", xf_index)
else:
raise XLRDError("unexpected special case (0x%02x) in FORMULA" % ord(data[6]))
else:
# it is a number
d = local_unpack('<d', data[6:14])[0]
self_put_number_cell(rowx, colx, d, xf_index)
elif rc == XL_BOOLERR:
rowx, colx, xf_index, value, is_err = local_unpack('<HHHBB', data[:8])
# Note OOo Calc 2.0 writes 9-byte BOOLERR records.
# OOo docs say 8. Excel writes 8.
cellty = (XL_CELL_BOOLEAN, XL_CELL_ERROR)[is_err]
# if DEBUG: print "XL_BOOLERR", rowx, colx, xf_index, value, is_err
self.put_cell(rowx, colx, cellty, value, xf_index)
elif rc == XL_COLINFO:
if not fmt_info: continue
c = Colinfo()
first_colx, last_colx, c.width, c.xf_index, flags \
= local_unpack("<HHHHH", data[:10])
#### Colinfo.width is denominated in 256ths of a character,
#### *not* in characters.
if not(0 <= first_colx <= last_colx <= 256):
# Note: 256 instead of 255 is a common mistake.
# We silently ignore the non-existing 257th column in that case.
print >> self.logfile, \
"*** NOTE: COLINFO record has first col index %d, last %d; " \
"should have 0 <= first <= last <= 255 -- record ignored!" \
% (first_colx, last_colx)
del c
continue
upkbits(c, flags, (
( 0, 0x0001, 'hidden'),
( 1, 0x0002, 'bit1_flag'),
# *ALL* colinfos created by Excel in "default" cases are 0x0002!!
# Maybe it's "locked" by analogy with XFProtection data.
( 8, 0x0700, 'outline_level'),
(12, 0x1000, 'collapsed'),
))
for colx in xrange(first_colx, last_colx+1):
if colx > 255: break # Excel does 0 to 256 inclusive
self.colinfo_map[colx] = c
if 0:
fprintf(self.logfile,
"**COL %d %d %d\n",
self.number, colx, c.xf_index)
if blah:
fprintf(
self.logfile,
"COLINFO sheet #%d cols %d-%d: wid=%d xf_index=%d flags=0x%04x\n",
self.number, first_colx, last_colx, c.width, c.xf_index, flags,
)
c.dump(self.logfile, header='===')
elif rc == XL_DEFCOLWIDTH:
self.defcolwidth, = local_unpack("<H", data[:2])
if 0: print >> self.logfile, 'DEFCOLWIDTH', self.defcolwidth
elif rc == XL_STANDARDWIDTH:
if data_len != 2:
print >> self.logfile, '*** ERROR *** STANDARDWIDTH', data_len, repr(data)
self.standardwidth, = local_unpack("<H", data[:2])
if 0: print >> self.logfile, 'STANDARDWIDTH', self.standardwidth
elif rc == XL_GCW:
if not fmt_info: continue # useless w/o COLINFO
assert data_len == 34
assert data[0:2] == "\x20\x00"
iguff = unpack("<8i", data[2:34])
gcw = []
for bits in iguff:
for j in xrange(32):
gcw.append(bits & 1)
bits >>= 1
self.gcw = tuple(gcw)
if 0:
showgcw = "".join(map(lambda x: "F "[x], gcw)).rstrip().replace(' ', '.')
print "GCW:", showgcw
elif rc == XL_BLANK:
if not fmt_info: continue
rowx, colx, xf_index = local_unpack('<HHH', data[:6])
if 0: print >> self.logfile, "BLANK", rowx, colx, xf_index
self_put_blank_cell(rowx, colx, xf_index)
elif rc == XL_MULBLANK: # 00BE
if not fmt_info: continue
mul_row, mul_first = local_unpack('<HH', data[0:4])
mul_last, = local_unpack('<H', data[-2:])
if 0:
print >> self.logfile, "MULBLANK", mul_row, mul_first, mul_last
pos = 4
for colx in xrange(mul_first, mul_last+1):
xf_index, = local_unpack('<H', data[pos:pos+2])
pos += 2
self_put_blank_cell(mul_row, colx, xf_index)
elif rc == XL_DIMENSION or rc == XL_DIMENSION2:
# if data_len == 10:
# Was crashing on BIFF 4.0 file w/o the two trailing unused bytes.
# Reported by Ralph Heimburger.
if bv < 80:
dim_tuple = local_unpack('<HxxH', data[2:8])
else:
dim_tuple = local_unpack('<ixxH', data[4:12])
self.nrows, self.ncols = 0, 0
self._dimnrows, self._dimncols = dim_tuple
if not self.book._xf_epilogue_done:
# Needed for bv <= 40
self.book.xf_epilogue()
if blah:
fprintf(self.logfile,
"sheet %d(%r) DIMENSIONS: ncols=%d nrows=%d\n",
self.number, self.name, self._dimncols, self._dimnrows
)
elif rc == XL_EOF:
DEBUG = 0
if DEBUG: print >> self.logfile, "SHEET.READ: EOF"
eof_found = 1
break
elif rc == XL_OBJ:
bk.handle_obj(data)
elif rc in bofcodes: ##### EMBEDDED BOF #####
version, boftype = local_unpack('<HH', data[0:4])
if boftype != 0x20: # embedded chart
print >> self.logfile, \
"*** Unexpected embedded BOF (0x%04x) at offset %d: version=0x%04x type=0x%04x" \
% (rc, bk._position - data_len - 4, version, boftype)
while 1:
code, data_len, data = bk.get_record_parts()
if code == XL_EOF:
break
if DEBUG: print >> self.logfile, "---> found EOF"
elif rc == XL_COUNTRY:
bk.handle_country(data)
elif rc == XL_LABELRANGES:
pos = 0
pos = unpack_cell_range_address_list_update_pos(
self.row_label_ranges, data, pos, bv, addr_size=8,
)
pos = unpack_cell_range_address_list_update_pos(
self.col_label_ranges, data, pos, bv, addr_size=8,
)
assert pos == data_len
elif rc == XL_ARRAY:
row1x, rownx, col1x, colnx, array_flags, tokslen = \
local_unpack("<HHBBBxxxxxH", data[:14])
if blah_formulas:
print "ARRAY:", row1x, rownx, col1x, colnx, array_flags
dump_formula(bk, data[14:], tokslen, bv, reldelta=0, blah=1)
elif rc == XL_SHRFMLA:
row1x, rownx, col1x, colnx, nfmlas, tokslen = \
local_unpack("<HHBBxBH", data[:10])
if blah_formulas:
print "SHRFMLA (main):", row1x, rownx, col1x, colnx, nfmlas
decompile_formula(bk, data[10:], tokslen, reldelta=0, blah=1)
elif rc == XL_CONDFMT:
if not fmt_info: continue
assert bv >= 80
num_CFs, needs_recalc, browx1, browx2, bcolx1, bcolx2 = \
unpack("<6H", data[0:12])
if self.verbosity >= 1:
fprintf(self.logfile,
"\n*** WARNING: Ignoring CONDFMT (conditional formatting) record\n" \
"*** in Sheet %d (%r).\n" \
"*** %d CF record(s); needs_recalc_or_redraw = %d\n" \
"*** Bounding box is %s\n",
self.number, self.name, num_CFs, needs_recalc,
rangename2d(browx1, browx2+1, bcolx1, bcolx2+1),
)
olist = [] # updated by the function
pos = unpack_cell_range_address_list_update_pos(
olist, data, 12, bv, addr_size=8)
# print >> self.logfile, repr(result), len(result)
if self.verbosity >= 1:
fprintf(self.logfile,
"*** %d individual range(s):\n" \
"*** %s\n",
len(olist),
", ".join([rangename2d(*coords) for coords in olist]),
)
elif rc == XL_CF:
if not fmt_info: continue
cf_type, cmp_op, sz1, sz2, flags = unpack("<BBHHi", data[0:10])
font_block = (flags >> 26) & 1
bord_block = (flags >> 28) & 1
patt_block = (flags >> 29) & 1
if self.verbosity >= 1:
fprintf(self.logfile,
"\n*** WARNING: Ignoring CF (conditional formatting) sub-record.\n" \
"*** cf_type=%d, cmp_op=%d, sz1=%d, sz2=%d, flags=0x%08x\n" \
"*** optional data blocks: font=%d, border=%d, pattern=%d\n",
cf_type, cmp_op, sz1, sz2, flags,
font_block, bord_block, patt_block,
)
# hex_char_dump(data, 0, data_len)
pos = 12
if font_block:
(font_height, font_options, weight, escapement, underline,
font_colour_index, two_bits, font_esc, font_underl) = \
unpack("<64x i i H H B 3x i 4x i i i 18x", data[pos:pos+118])
                    font_style = (two_bits >> 1) & 1
                    posture = (font_options >> 1) & 1
                    font_canc = (two_bits >> 7) & 1
                    cancellation = (font_options >> 7) & 1
if self.verbosity >= 1:
fprintf(self.logfile,
"*** Font info: height=%d, weight=%d, escapement=%d,\n" \
"*** underline=%d, colour_index=%d, esc=%d, underl=%d,\n" \
"*** style=%d, posture=%d, canc=%d, cancellation=%d\n",
font_height, weight, escapement, underline,
font_colour_index, font_esc, font_underl,
font_style, posture, font_canc, cancellation,
)
pos += 118
if bord_block:
pos += 8
if patt_block:
pos += 4
fmla1 = data[pos:pos+sz1]
pos += sz1
if blah and sz1:
fprintf(self.logfile,
"*** formula 1:\n",
)
dump_formula(bk, fmla1, sz1, bv, reldelta=0, blah=1)
fmla2 = data[pos:pos+sz2]
pos += sz2
assert pos == data_len
if blah and sz2:
fprintf(self.logfile,
"*** formula 2:\n",
)
dump_formula(bk, fmla2, sz2, bv, reldelta=0, blah=1)
elif rc == XL_DEFAULTROWHEIGHT:
if data_len == 4:
bits, self.default_row_height = unpack("<HH", data[:4])
elif data_len == 2:
self.default_row_height, = unpack("<H", data)
bits = 0
fprintf(self.logfile,
"*** WARNING: DEFAULTROWHEIGHT record len is 2, " \
"should be 4; assuming BIFF2 format\n")
else:
bits = 0
fprintf(self.logfile,
"*** WARNING: DEFAULTROWHEIGHT record len is %d, " \
"should be 4; ignoring this record\n",
data_len)
self.default_row_height_mismatch = bits & 1
self.default_row_hidden = (bits >> 1) & 1
self.default_additional_space_above = (bits >> 2) & 1
self.default_additional_space_below = (bits >> 3) & 1
elif rc == XL_MERGEDCELLS:
if not fmt_info: continue
pos = unpack_cell_range_address_list_update_pos(
self.merged_cells, data, 0, bv, addr_size=8)
if blah:
fprintf(self.logfile,
"MERGEDCELLS: %d ranges\n", int_floor_div(pos - 2, 8))
assert pos == data_len, \
"MERGEDCELLS: pos=%d data_len=%d" % (pos, data_len)
elif rc == XL_WINDOW2:
if bv >= 80:
(options,
self.first_visible_rowx, self.first_visible_colx,
self.gridline_colour_index,
self.cached_page_break_preview_mag_factor,
self.cached_normal_view_mag_factor
) = unpack("<HHHHxxHH", data[:14])
else: # BIFF3-7
(options,
self.first_visible_rowx, self.first_visible_colx,
) = unpack("<HHH", data[:6])
self.gridline_colour_rgb = unpack("<BBB", data[6:9])
self.gridline_colour_index = \
nearest_colour_index(
self.book.colour_map,
self.gridline_colour_rgb,
debug=0)
self.cached_page_break_preview_mag_factor = 0 # default (60%)
self.cached_normal_view_mag_factor = 0 # default (100%)
# options -- Bit, Mask, Contents:
# 0 0001H 0 = Show formula results 1 = Show formulas
# 1 0002H 0 = Do not show grid lines 1 = Show grid lines
# 2 0004H 0 = Do not show sheet headers 1 = Show sheet headers
# 3 0008H 0 = Panes are not frozen 1 = Panes are frozen (freeze)
# 4 0010H 0 = Show zero values as empty cells 1 = Show zero values
# 5 0020H 0 = Manual grid line colour 1 = Automatic grid line colour
# 6 0040H 0 = Columns from left to right 1 = Columns from right to left
# 7 0080H 0 = Do not show outline symbols 1 = Show outline symbols
# 8 0100H 0 = Keep splits if pane freeze is removed 1 = Remove splits if pane freeze is removed
# 9 0200H 0 = Sheet not selected 1 = Sheet selected (BIFF5-BIFF8)
# 10 0400H 0 = Sheet not visible 1 = Sheet visible (BIFF5-BIFF8)
# 11 0800H 0 = Show in normal view 1 = Show in page break preview (BIFF8)
# The freeze flag specifies, if a following PANE record (6.71) describes unfrozen or frozen panes.
for attr, _unused_defval in _WINDOW2_options:
setattr(self, attr, options & 1)
options >>= 1
# print "WINDOW2: visible=%d selected=%d" \
# % (self.sheet_visible, self.sheet_selected)
#### all of the following are for BIFF <= 4W
elif bv <= 45:
if rc == XL_FORMAT or rc == XL_FORMAT2:
bk.handle_format(data)
elif rc == XL_FONT or rc == XL_FONT_B3B4:
bk.handle_font(data)
elif rc == XL_STYLE:
bk.handle_style(data)
elif rc == XL_PALETTE:
bk.handle_palette(data)
elif rc == XL_BUILTINFMTCOUNT:
bk.handle_builtinfmtcount(data)
elif rc == XL_XF4 or rc == XL_XF3: #### N.B. not XL_XF
bk.handle_xf(data)
elif rc == XL_DATEMODE:
bk.handle_datemode(data)
elif rc == XL_CODEPAGE:
bk.handle_codepage(data)
elif rc == XL_FILEPASS:
bk.handle_filepass(data)
elif rc == XL_WRITEACCESS:
bk.handle_writeaccess(data)
else:
# if DEBUG: print "SHEET.READ: Unhandled record type %02x %d bytes %r" % (rc, data_len, data)
pass
if not eof_found:
raise XLRDError("Sheet %d (%r) missing EOF record" \
% (self.number, self.name))
self.tidy_dimensions()
bk.position(oldpos)
return 1
def req_fmt_info(self):
if not self.formatting_info:
raise XLRDError("Feature requires open_workbook(..., formatting_info=True)")
##
# Determine column display width.
# <br /> -- New in version 0.6.1
# <br />
# @param colx Index of the queried column, range 0 to 255.
# Note that it is possible to find out the width that will be used to display
# columns with no cell information e.g. column IV (colx=255).
# @return The column width that will be used for displaying
# the given column by Excel, in units of 1/256th of the width of a
# standard character (the digit zero in the first font).
def computed_column_width(self, colx):
self.req_fmt_info()
if self.biff_version >= 80:
colinfo = self.colinfo_map.get(colx, None)
if colinfo is not None:
return colinfo.width
if self.standardwidth is not None:
return self.standardwidth
elif self.biff_version >= 40:
if self.gcw[colx]:
if self.standardwidth is not None:
return self.standardwidth
else:
colinfo = self.colinfo_map.get(colx, None)
if colinfo is not None:
return colinfo.width
elif self.biff_version == 30:
colinfo = self.colinfo_map.get(colx, None)
if colinfo is not None:
return colinfo.width
# All roads lead to Rome and the DEFCOLWIDTH ...
if self.defcolwidth is not None:
return self.defcolwidth * 256
return 8 * 256 # 8 is what Excel puts in a DEFCOLWIDTH record
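The BIFF8 branch of the resolution order above can be condensed into a standalone helper. This is an illustrative sketch (not part of xlrd's API; the function name is made up), mirroring the fallback chain COLINFO, then STANDARDWIDTH, then DEFCOLWIDTH:

```python
def resolve_biff8_width(colinfo_width, standardwidth, defcolwidth):
    # BIFF8 order, mirroring computed_column_width above:
    # COLINFO width, then STANDARDWIDTH, then DEFCOLWIDTH (denominated
    # in characters, hence the * 256), finally Excel's default of
    # 8 characters (8 * 256 units of 1/256th of a character).
    if colinfo_width is not None:
        return colinfo_width
    if standardwidth is not None:
        return standardwidth
    if defcolwidth is not None:
        return defcolwidth * 256
    return 8 * 256
```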
# === helpers ===
def unpack_RK(rk_str):
flags = ord(rk_str[0])
if flags & 2:
# There's a SIGNED 30-bit integer in there!
i, = unpack('<i', rk_str)
i >>= 2 # div by 4 to drop the 2 flag bits
if flags & 1:
return i / 100.0
return float(i)
else:
# It's the most significant 30 bits of an IEEE 754 64-bit FP number
d, = unpack('<d', '\0\0\0\0' + chr(flags & 252) + rk_str[1:4])
if flags & 1:
return d / 100.0
return d
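To make the RK encoding concrete, here is a small demonstration of the same logic written for Python 3 `bytes` (a standalone sketch; `decode_rk` is an illustrative name, not part of this module):

```python
import struct

def decode_rk(rk_bytes):
    # Same decoding as unpack_RK above, using Python 3 bytes.
    flags = rk_bytes[0]
    if flags & 2:
        # Signed 30-bit integer stored in the top 30 bits.
        i, = struct.unpack('<i', rk_bytes)
        i >>= 2  # drop the two flag bits
        return i / 100.0 if flags & 1 else float(i)
    # Otherwise: the most significant 30 bits of an IEEE 754 double,
    # with the low 32 bits (and the two flag bits) zeroed.
    d, = struct.unpack('<d', b'\0\0\0\0' + bytes([flags & 252]) + rk_bytes[1:4])
    return d / 100.0 if flags & 1 else d

# 12345 stored as an integer scaled by 100 (both flag bits set):
print(decode_rk(struct.pack('<i', (12345 << 2) | 3)))  # 123.45
```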
##### =============== Cell ======================================== #####
cellty_from_fmtty = {
FNU: XL_CELL_NUMBER,
FUN: XL_CELL_NUMBER,
FGE: XL_CELL_NUMBER,
FDT: XL_CELL_DATE,
FTX: XL_CELL_NUMBER, # Yes, a number can be formatted as text.
}
ctype_text = {
    XL_CELL_EMPTY: 'empty',
    XL_CELL_TEXT: 'text',
    XL_CELL_NUMBER: 'number',
    XL_CELL_DATE: 'xldate',
    XL_CELL_BOOLEAN: 'bool',
    XL_CELL_ERROR: 'error',
    XL_CELL_BLANK: 'blank',
    }
##
# <p>Contains the data for one cell.</p>
#
# <p>WARNING: You don't call this class yourself. You access Cell objects
# via methods of the Sheet object(s) that you found in the Book object that
# was returned when you called xlrd.open_workbook("myfile.xls").</p>
# <p> Cell objects have three attributes: <i>ctype</i> is an int, <i>value</i>
# (which depends on <i>ctype</i>) and <i>xf_index</i>.
# If "formatting_info" is not enabled when the workbook is opened, xf_index will be None.
# The following table describes the types of cells and how their values
# are represented in Python.</p>
#
# <table border="1" cellpadding="7">
# <tr>
# <th>Type symbol</th>
# <th>Type number</th>
# <th>Python value</th>
# </tr>
# <tr>
# <td>XL_CELL_EMPTY</td>
# <td align="center">0</td>
# <td>empty string u''</td>
# </tr>
# <tr>
# <td>XL_CELL_TEXT</td>
# <td align="center">1</td>
# <td>a Unicode string</td>
# </tr>
# <tr>
# <td>XL_CELL_NUMBER</td>
# <td align="center">2</td>
# <td>float</td>
# </tr>
# <tr>
# <td>XL_CELL_DATE</td>
# <td align="center">3</td>
# <td>float</td>
# </tr>
# <tr>
# <td>XL_CELL_BOOLEAN</td>
# <td align="center">4</td>
# <td>int; 1 means TRUE, 0 means FALSE</td>
# </tr>
# <tr>
# <td>XL_CELL_ERROR</td>
# <td align="center">5</td>
# <td>int representing internal Excel codes; for a text representation,
# refer to the supplied dictionary error_text_from_code</td>
# </tr>
# <tr>
# <td>XL_CELL_BLANK</td>
# <td align="center">6</td>
# <td>empty string u''. Note: this type will appear only when
# open_workbook(..., formatting_info=True) is used.</td>
# </tr>
# </table>
#<p></p>
class Cell(BaseObject):
__slots__ = ['ctype', 'value', 'xf_index']
def __init__(self, ctype, value, xf_index=None):
self.ctype = ctype
self.value = value
self.xf_index = xf_index
def __repr__(self):
if self.xf_index is None:
return "%s:%r" % (ctype_text[self.ctype], self.value)
else:
return "%s:%r (XF:%r)" % (ctype_text[self.ctype], self.value, self.xf_index)
##
# There is one and only one instance of an empty cell -- it's a singleton. This is it.
# You may use a test like "acell is empty_cell".
empty_cell = Cell(XL_CELL_EMPTY, '')
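The identity test suggested above ("acell is empty_cell") works because every out-of-range lookup returns the same shared object. A minimal standalone sketch of that pattern (a stand-in `Cell` and a hypothetical lookup helper, not xlrd's own API):

```python
class Cell(object):
    __slots__ = ['ctype', 'value', 'xf_index']
    def __init__(self, ctype, value, xf_index=None):
        self.ctype = ctype
        self.value = value
        self.xf_index = xf_index

XL_CELL_EMPTY = 0
empty_cell = Cell(XL_CELL_EMPTY, '')  # the one and only empty cell

def cell_or_empty(cells, rowx, colx):
    # Out-of-range lookups fall back to the shared singleton,
    # so identity comparison ("acell is empty_cell") is meaningful.
    try:
        return cells[rowx][colx]
    except IndexError:
        return empty_cell

grid = [[Cell(1, u'spam')]]
print(cell_or_empty(grid, 5, 0) is empty_cell)  # True
```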
##### =============== Colinfo and Rowinfo ============================== #####
##
# Width and default formatting information that applies to one or
# more columns in a sheet. Derived from COLINFO records.
#
# <p> Here is the default hierarchy for width, according to the OOo docs:
#
# <br />"""In BIFF3, if a COLINFO record is missing for a column,
# the width specified in the record DEFCOLWIDTH is used instead.
#
# <br />In BIFF4-BIFF7, the width set in this [COLINFO] record is only used,
# if the corresponding bit for this column is cleared in the GCW
# record, otherwise the column width set in the DEFCOLWIDTH record
# is used (the STANDARDWIDTH record is always ignored in this case [see footnote!]).
#
# <br />In BIFF8, if a COLINFO record is missing for a column,
# the width specified in the record STANDARDWIDTH is used.
# If this [STANDARDWIDTH] record is also missing,
# the column width of the record DEFCOLWIDTH is used instead."""
# <br />
#
# Footnote: The docs on the GCW record say this:
# """<br />
# If a bit is set, the corresponding column uses the width set in the STANDARDWIDTH
# record. If a bit is cleared, the corresponding column uses the width set in the
# COLINFO record for this column.
# <br />If a bit is set, and the worksheet does not contain the STANDARDWIDTH record, or if
# the bit is cleared, and the worksheet does not contain the COLINFO record, the DEFCOLWIDTH
# record of the worksheet will be used instead.
# <br />"""<br />
# At the moment (2007-01-17) xlrd is going with the GCW version of the story.
# Reference to the source may be useful: see the computed_column_width(colx) method
# of the Sheet class.
# <br />-- New in version 0.6.1
# </p>
class Colinfo(BaseObject):
##
# Width of the column in 1/256 of the width of the zero character,
# using default font (first FONT record in the file).
width = 0
##
# XF index to be used for formatting empty cells.
xf_index = -1
##
# 1 = column is hidden
hidden = 0
##
# Value of a 1-bit flag whose purpose is unknown
# but is often seen set to 1
bit1_flag = 0
##
# Outline level of the column, in range(7).
# (0 = no outline)
outline_level = 0
##
# 1 = column is collapsed
collapsed = 0
##
# Height and default formatting information that applies to a row in a sheet.
# Derived from ROW records.
# <br /> -- New in version 0.6.1
class Rowinfo(BaseObject):
##
# Height of the row, in twips. One twip == 1/20 of a point
height = 0
##
# 0 = Row has custom height; 1 = Row has default height
has_default_height = 0
##
# Outline level of the row
outline_level = 0
##
# 1 = Outline group starts or ends here (depending on where the
# outline buttons are located, see WSBOOL record [TODO ??]),
# <i>and</i> is collapsed
outline_group_starts_ends = 0
##
# 1 = Row is hidden (manually, or by a filter or outline group)
hidden = 0
##
# 1 = Row height and default font height do not match
height_mismatch = 0
##
# 1 = the xf_index attribute is usable; 0 = ignore it
has_default_xf_index = 0
##
# Index to default XF record for empty cells in this row.
# Don't use this if has_default_xf_index == 0.
xf_index = -9999
##
# This flag is set, if the upper border of at least one cell in this row
# or if the lower border of at least one cell in the row above is
# formatted with a thick line style. Thin and medium line styles are not
# taken into account.
additional_space_above = 0
##
# This flag is set, if the lower border of at least one cell in this row
# or if the upper border of at least one cell in the row below is
# formatted with a medium or thick line style. Thin line styles are not
# taken into account.
additional_space_below = 0
|
A quote made famous by one of my favorite philosophers, Heraclitus.
A quote that has been uttered from my boyfriend’s lips too many times to count.
I don’t know about you, but I like to change things up when it comes to breakfast. Of course I cycle through the usual rotation, but every once in a while, it becomes time to try something different.
So when I saw the following recipe on Minimalistbaker.com, I kept it Pinned until I accumulated some bread: ever since going gluten free, bread has become a rarity in this kitchen.
The following recipe was adapted from Minimalist Baker’s, as I had nowhere near enough bread for their 10-serving dish. Below you will find the two-serving version; check out their site for the full 10-serving recipe.
Have additional maple syrup on hand for serving.
-Grease a small casserole dish with dairy free butter. I used a 1-qt dish.
-Add the cubed bread to the dish.
-In a medium sized bowl, whisk the eggs, milk alternative, spices, syrup, and extract.
-Pour this mixture over the bread. Using the back of a fork, press all of the pieces of bread down so that they are covered with the mixture.
-How easy was that? Now just place some Saran Wrap over the dish, and place in the refrigerator until the next morning.
-When you are ready to bake, preheat the oven to 350 degrees. Take off the Saran Wrap and sprinkle the brown sugar and cinnamon on top of the bread mixture. You can also add nuts, but I never put nuts on my French toast, so I opted out.
-Bake in the oven for about 35 minutes or so.
-Serve with maple syrup and enjoy the Pumpkiny goodness!
|
from django.db import models
class PolicyDomain(models.Model):
title = models.CharField(max_length=100, unique=True)
description = models.TextField()
class Meta:
ordering = ['title']
verbose_name = "Policy Domain"
verbose_name_plural = "Policy Domains"
def __str__(self):
return self.title
class Language(models.Model):
code = models.CharField(max_length=2, unique=True)
title = models.CharField(max_length=100, unique=True)
class Meta:
verbose_name = "Language"
verbose_name_plural = "Languages"
def __str__(self):
return self.title
class ExternalResource(models.Model):
title = models.CharField(max_length=100, unique=True)
url = models.URLField()
api_url = models.URLField()
class Meta:
verbose_name = "External Resource"
verbose_name_plural = "External Resources"
def __str__(self):
return self.title
class UnitCategory(models.Model):
title = models.CharField(max_length=100, unique=True)
identifier = models.CharField(max_length=100, unique=True)
class Meta:
verbose_name = "Unit Category"
verbose_name_plural = "Unit Categories"
def __str__(self):
return self.title
class Unit(models.Model):
title = models.CharField(max_length=50, unique=True)
description = models.TextField()
unit_category = models.ForeignKey(UnitCategory)
identifier = models.CharField(max_length=100, unique=True)
class Meta:
verbose_name = "Unit"
verbose_name_plural = "Units"
def __str__(self):
return self.title
class DateFormat(models.Model):
"""
Holds different formats for dates
"""
# Based on https://docs.python.org/2/library/datetime.html#strftime-strptime-behavior
format = models.CharField(max_length=50, unique=True)
example = models.CharField(max_length=50)
# Based on http://en.wikipedia.org/wiki/Date_format_by_country
symbol = models.CharField(max_length=50)
class Meta:
verbose_name = "Date Format"
verbose_name_plural = "Date Formats"
def __str__(self):
return self.example
class DataClass(models.Model):
"""
Refers to a Policy Compass Class
"""
title = models.CharField(max_length=100, unique=True)
description = models.TextField()
code_type = models.CharField(max_length=30, blank=True)
class Meta:
verbose_name = "Class"
verbose_name_plural = "Classes"
def __str__(self):
return self.title
class Individual(models.Model):
title = models.CharField(max_length=100)
code = models.CharField(max_length=30, blank=True)
data_class = models.ForeignKey(DataClass)
class Meta:
verbose_name = "Individual"
verbose_name_plural = "Individuals"
def __str__(self):
return self.title
|
St. Lawrence County is a vast and undiscovered area of New York’s Northern Border reaching from the St. Lawrence River to the foothills of the Adirondack mountains. Rural farms, including a large Amish community, meet small town charm with colleges, industry and history. The county’s diverse natural and cultural resources make St. Lawrence a perfect destination for those who love paddling, waterfalls, hiking, folk music & local art, wine tasting and farm to table cuisine. St. Lawrence County also provides a perfect place for fishermen who love fishing for bass, muskie and carp.
This web site offers more information on tourism, entertainment and community resources, local businesses, relocating to the area, and the latest news and events.
To view a slideshow of some of the different activities available in St. Lawrence County click here.
|
# Copyright (c) 2013 OpenStack Foundation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import functools
from oslo.config import cfg
from neutron.common import constants as const
from neutron.openstack.common import log as logging
from neutron.plugins.ml2 import driver_api as api
from neutronclient.common import exceptions
from neutron.openstack.common import local
from neutron.openstack.common import context
from neutron import context as n_context
from neutron.openstack.common import importutils
from neutron.openstack.common import excutils
from neutron.plugins.l2_proxy.agent import neutron_keystoneclient as hkc
LOG = logging.getLogger(__name__)
try:
from neutronclient.v2_0 import client as neutronclient
except ImportError:
neutronclient = None
LOG.info('neutronclient not available')
CASCADING = 'cascading'
class RequestContext(context.RequestContext):
"""
Stores information about the security context under which the user
accesses the system, as well as additional request information.
"""
def __init__(self, auth_token=None, username=None, password=None,
aws_creds=None, tenant=None,
tenant_id=None, auth_url=None, roles=None, is_admin=False,
                 insecure=True, region_name=None, read_only=False,
                 show_deleted=False, owner_is_tenant=True, overwrite=True,
trust_id=None, trustor_user_id=None,
**kwargs):
"""
:param overwrite: Set to False to ensure that the greenthread local
copy of the index is not overwritten.
:param kwargs: Extra arguments that might be present, but we ignore
because they possibly came in from older rpc messages.
"""
super(RequestContext, self).__init__(auth_token=auth_token,
user=username, tenant=tenant,
is_admin=is_admin,
read_only=read_only,
show_deleted=show_deleted,
request_id='unused')
self.username = username
self.password = password
self.aws_creds = aws_creds
self.tenant_id = tenant_id
self.auth_url = auth_url
self.roles = roles or []
self.region_name = region_name
self.insecure = insecure
self.owner_is_tenant = owner_is_tenant
if overwrite or not hasattr(local.store, 'context'):
self.update_store()
self._session = None
self.trust_id = trust_id
self.trustor_user_id = trustor_user_id
def update_store(self):
local.store.context = self
def to_dict(self):
return {'auth_token': self.auth_token,
'username': self.username,
'password': self.password,
'aws_creds': self.aws_creds,
'tenant': self.tenant,
'tenant_id': self.tenant_id,
'trust_id': self.trust_id,
'insecure': self.insecure,
'trustor_user_id': self.trustor_user_id,
'auth_url': self.auth_url,
'roles': self.roles,
'is_admin': self.is_admin,
'region_name': self.region_name}
@classmethod
def from_dict(cls, values):
return cls(**values)
@property
def owner(self):
"""Return the owner to correlate with an image."""
return self.tenant if self.owner_is_tenant else self.user
def get_admin_context(read_deleted="no"):
return RequestContext(is_admin=True)
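The to_dict/from_dict pair above exists so a context can be serialized across an RPC boundary and rebuilt on the other side. A minimal standalone mirror of that round trip (an illustrative class, not the oslo one; note how extra kwargs are ignored, as the RequestContext docstring describes):

```python
class MiniContext(object):
    def __init__(self, username=None, tenant=None, is_admin=False,
                 region_name=None, **kwargs):
        # Extra kwargs are silently ignored, as in RequestContext above,
        # so fields added by newer senders do not break older receivers.
        self.username = username
        self.tenant = tenant
        self.is_admin = is_admin
        self.region_name = region_name

    def to_dict(self):
        return {'username': self.username, 'tenant': self.tenant,
                'is_admin': self.is_admin, 'region_name': self.region_name}

    @classmethod
    def from_dict(cls, values):
        return cls(**values)

ctx = MiniContext(username='admin', tenant='service', is_admin=True)
clone = MiniContext.from_dict(ctx.to_dict())
print(clone.username, clone.is_admin)  # admin True
```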
class OpenStackClients(object):
'''
Convenience class to create and cache client instances.
'''
def __init__(self, context):
self.context = context
self._neutron = None
self._keystone = None
@property
def auth_token(self):
# if there is no auth token in the context
# attempt to get one using the context username and password
return self.context.auth_token or self.keystone().auth_token
def keystone(self):
if self._keystone:
return self._keystone
self._keystone = hkc.KeystoneClient(self.context)
return self._keystone
def url_for(self, **kwargs):
return self.keystone().url_for(**kwargs)
def neutron(self):
if neutronclient is None:
return None
if self._neutron:
return self._neutron
con = self.context
if self.auth_token is None:
LOG.error("Neutron connection failed, no auth_token!")
return None
if self.context.region_name is None:
management_url = self.url_for(service_type='network',
endpoint_type='publicURL')
else:
management_url = self.url_for(
service_type='network',
attr='region',
endpoint_type='publicURL',
filter_value=self.context.region_name)
args = {
'auth_url': con.auth_url,
'insecure': self.context.insecure,
'service_type': 'network',
'token': self.auth_token,
'endpoint_url': management_url
}
self._neutron = neutronclient.Client(**args)
return self._neutron
def get_cascading_neutron_client():
context = n_context.get_admin_context_without_session()
auth_url = 'https://%s:%s/%s/%s' %(cfg.CONF.keystone_authtoken.auth_host,
cfg.CONF.keystone_authtoken.auth_port,
cfg.CONF.keystone_authtoken.auth_admin_prefix,
cfg.CONF.keystone_authtoken.auth_version)
kwargs = {'auth_token': None,
'username': cfg.CONF.keystone_authtoken.admin_user,
'password': cfg.CONF.keystone_authtoken.admin_password,
'aws_creds': None,
'tenant': cfg.CONF.keystone_authtoken.admin_tenant_name,
'auth_url': auth_url,
'insecure': cfg.CONF.keystone_authtoken.insecure,
'roles': context.roles,
'is_admin': context.is_admin,
'region_name': cfg.CONF.cascading_os_region_name}
reqCon = RequestContext(**kwargs)
openStackClients = OpenStackClients(reqCon)
neutronClient = openStackClients.neutron()
return neutronClient
def check_neutron_client_valid(function):
@functools.wraps(function)
def decorated_function(self, method_name, *args, **kwargs):
retry = 0
        while True:
try:
return function(self, method_name, *args, **kwargs)
except exceptions.Unauthorized:
retry = retry + 1
                if retry <= 3:
self.client = get_cascading_neutron_client()
continue
else:
with excutils.save_and_reraise_exception():
                        LOG.error(_('Tried 3 times, still Unauthorized.'))
return None
return decorated_function
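The pattern above (catch `Unauthorized`, rebuild the client to obtain a fresh token, and retry a bounded number of times) can be sketched as a self-contained example. Note that `FlakyClient` and `rebuild` below are illustrative stand-ins, not part of this module:

```python
import functools

class Unauthorized(Exception):
    """Stand-in for keystoneclient/neutronclient's Unauthorized exception."""

def retry_on_unauthorized(rebuild_client, max_retries=3):
    """Retry a wrapped client call, rebuilding self.client on auth failure."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            attempts = 0
            while True:
                try:
                    return func(self, *args, **kwargs)
                except Unauthorized:
                    attempts += 1
                    if attempts > max_retries:
                        raise  # give up after max_retries rebuilds
                    self.client = rebuild_client()  # fetch a fresh token
        return wrapper
    return decorator

class FlakyClient:
    """Raises Unauthorized until its token is 'refreshed'."""
    def __init__(self, valid):
        self.valid = valid
    def list_ports(self):
        if not self.valid:
            raise Unauthorized()
        return ['port-1']

rebuilds = []
def rebuild():
    rebuilds.append(1)
    return FlakyClient(valid=True)

class Caller:
    def __init__(self):
        self.client = FlakyClient(valid=False)  # starts with an expired token
    @retry_on_unauthorized(rebuild)
    def call(self, method_name, *args, **kwargs):
        return getattr(self.client, method_name)(*args, **kwargs)

caller = Caller()
print(caller.call('list_ports'))  # ['port-1'] after one client rebuild
```

The real decorator differs only in that it rebuilds a cascading neutron client and re-raises via `excutils.save_and_reraise_exception` once the retry budget is exhausted.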
class CascadeNeutronClient(object):
def __init__(self):
#mode is cascading or cascaded
self.client = get_cascading_neutron_client()
@check_neutron_client_valid
def __call__(self, method_name, *args, **kwargs):
method = getattr(self.client, method_name)
if method:
return method(*args, **kwargs)
else:
raise Exception('Can not find the method')
@check_neutron_client_valid
def execute(self, method_name, *args, **kwargs):
method = getattr(self.client, method_name)
if method:
return method(*args, **kwargs)
else:
raise Exception('Can not find the method')
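`CascadeNeutronClient.__call__` and `execute` both dispatch by method name with `getattr`; the core of that pattern, stripped of the retry decorator, is a minimal proxy (standalone sketch, where `Backend` is a made-up stand-in for the neutron client):

```python
class Proxy:
    """Forward a call, identified by method name, to a wrapped backend."""
    def __init__(self, backend):
        self.backend = backend

    def __call__(self, method_name, *args, **kwargs):
        method = getattr(self.backend, method_name, None)
        if method is None:
            raise AttributeError('Can not find the method: %s' % method_name)
        return method(*args, **kwargs)

class Backend:
    def add(self, a, b):
        return a + b

proxy = Proxy(Backend())
print(proxy('add', 2, 3))  # 5
```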
class Cascaded2MechanismDriver(api.MechanismDriver):
def __init__(self):
super(Cascaded2MechanismDriver, self).__init__()
self.notify_cascading = False
if cfg.CONF.cascading_os_region_name:
self.cascading_neutron_client = CascadeNeutronClient()
self.notify_cascading = True
def initialize(self):
LOG.debug(_("Experimental L2 population driver"))
self.rpc_ctx = n_context.get_admin_context_without_session()
def get_cascading_port_id(self, cascaded_port_name):
try:
return cascaded_port_name.split('@')[1]
except Exception:
return None
def update_port_postcommit(self, context):
if not self.notify_cascading:
return
cur_port = context.current
orig_port = context._original_port
LOG.debug(_("update_port_postcommit update "
"current_port:%s original:%s") % (cur_port, orig_port))
if not (context.original_host and context.host
and const.DEVICE_OWNER_COMPUTER in cur_port['device_owner']):
return
if context.host != context.original_host:
cascading_port_id = self.get_cascading_port_id(cur_port['name'])
if cascading_port_id:
update_attrs = {'port': {'binding:profile': {'refresh_notify': True}}}
for i in range(const.UPDATE_RETRY):
try:
self.cascading_neutron_client('update_port', cascading_port_id, update_attrs)
LOG.debug(_("host_id(%s -> %s) changed, notify the cascading")
%(context.original_host, context.host))
break
except Exception as e:
LOG.debug(_("Notify cascading refresh port failed(%s)! try %d") % (str(e), i))
continue
Disclaimer: I’d like to thank Philips for sponsoring this article and sending out their ADR810 dash cam for me to test out. As always, all opinions are my own and sponsorship doesn’t influence my content in any way. Read my full disclaimer page here.
While motorcyclists power along our roads with action cams fitted on their helmets, dash cams are becoming popular among car drivers who want to capture proof of road rage and any unfortunate accidents.
Wanting to show off their new range of dash cams, Philips sent out their top-of-the-line Philips ADR810 dash cam (currently £145) so that I can test it out myself.
Why do you even need a dash cam?
If that isn’t reason enough to sway you, some car insurance providers offer discounts if you use a dash cam. Are you a little more interested now? I thought so.
To give you a little more insight about dash cam technology and what it means for you, I asked Yiqiao Wang, the Mobility Product Manager at Philips, a few questions. Further down this page I’ll also go through my first impressions of the ADR810 dash cam.
Fabio: Dashcams are becoming increasingly popular now; why do you think that is?
Philips: In the UK, there has been a lot of media coverage of ‘crash for cash’ scammers recently. According to the Insurance Fraud Bureau, ‘crash for cash’ costs the UK £340m every year. The use of a dash cam can really help honest drivers prove they are not at fault. Drivers also look for a hassle-free driving experience: they don’t want to think about possible minor or major accidents on the road. Most importantly, they don’t want to worry about finding evidence after an accident, which makes a dash cam a reliable daily companion. Of course, some people also use dash cams to record funny moments on the road.
Fabio: What are the key features customers need to look for in a dash cam, and why?
Fabio: What makes Philips’ ADR810 dash cam special?
Philips: Philips ADR810 guarantees best-in-class video quality on the market. It delivers a very high-quality, superior-resolution view at night. When testing the product after dark, you can see the superior quality of the recording for yourself. We also added an easy-to-use feature, the “EasyCapture” button on the cigarette lighter adapter. When you sense any potential danger, you can press this button and an emergency recording starts immediately, without the need to press any buttons on the dash cam itself.
Fabio: How do you make the ADR810 accessible for people who aren’t comfortable using technology?
Philips: Philips ADR810 has been designed as an easy-to-use plug-and-play product. The only thing you need to do is fit the dash cam on your windshield; it will then do all the work for you. It turns on automatically when you start the car. When it detects any movement in front of your car, it starts recording automatically. The G-sensor included in the device detects any harsh braking or impacts and records every detail into the emergency video file. When you want to check the video footage, it’s as easy as reading an SD card on your computer. So there’s no need to worry about any complexity of using technology at all; Philips ADR810 does everything for you.
Fabio: Your ‘Fatigue Index & Driver Alert’ feature looks interesting. Is this purely based on time spent driving, or does it detect fatigue in a more specific way?
Philips: It is an algorithm pre-installed in the software. It registers the total driving time and the precise time of day at which you are driving. From there, Philips ADR810 is able to tell you if you should take a break. However, this is not real-time monitoring or detection.
Let’s quickly get the geeky stuff out of the way. The Philips ADR810 dash cam’s 156° wide angle lens records at full HD 1080p resolution and at 30 frames per second (which is plenty for a dash cam). It has an “Automatic Collision Detection” feature that auto-saves a clip when you hit something, and an “Emergency EasyCapture” feature to manually record when you need it. You can use the latter by pressing the button found on the cigarette lighter adapter.
The aforementioned “Fatigue Index & Driver Alert” feature “shows the evolution of a driver’s fatigue and will produce a visual and audible warning message when the driver should have a rest”. As you may have read in the interview, though, it’s simply based on total driving time and the time of day. It may not provide data tailored to you personally, although it’s still useful to have a reminder to take a break.
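Philips hasn’t published the algorithm, but a purely time-based heuristic like the one described might look roughly like this sketch (every threshold below is my own guess, not Philips’ actual logic):

```python
def fatigue_warning(hours_driven, hour_of_day):
    """Toy fatigue heuristic: warn after long stints or late-night driving.
    All thresholds are illustrative guesses, not Philips' real algorithm."""
    score = hours_driven / 2.0            # fatigue grows with time at the wheel
    if hour_of_day >= 23 or hour_of_day < 5:
        score += 1.0                      # night driving adds extra fatigue
    return score >= 1.0                   # suggest a break past the threshold

print(fatigue_warning(1.0, 14))  # False: short daytime drive
print(fatigue_warning(2.5, 14))  # True: long stint
print(fatigue_warning(0.5, 2))   # True: short drive, but at 2 a.m.
```

The point is simply that nothing about the driver is actually sensed; the warning falls out of the clock.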
I’m liking the ADR810’s all-black design as it keeps the dash cam quite stealthy. The display is a nicely sized 2.7" LCD panel, and while it isn’t touch-enabled, Philips gives you four buttons on the right side of the display to navigate with. It’s also quite a compact dash cam, measuring 106.7 x 50.0 x 32.5mm and weighing only 83g. That means you should comfortably be able to squeeze the ADR810 behind your rear-view mirror like I did, assuming you want to keep it out of sight.
Unfortunately, however, Philips doesn’t throw in the microSD card you need to use the dash cam. (Brownie points taken away.) That’s a shame, especially considering this SanDisk 16GB microSD only costs £5 on Amazon; including one would have made the customer experience that much easier. It’s not a huge added cost for us as customers, but in my opinion Philips should just throw it into the package for convenience.
What’s it like to use the Philips ADR810 dash cam?
Although I would have liked a touch-enabled display, I’ve found the ADR810 dash cam easy to use and navigate. As soon as you start your car the dash cam starts working on its own, so it’s a case of plugging it in and forgetting about it. Philips gave the ADR810 a grid-based menu that shows six options at a time, and you navigate using the up/down buttons on the dash cam itself to set things like the recording resolution, language and even the sensitivity of the camera’s collision detection feature.
I’ll need to use it for a longer period of time to really get a feel for it, but on my first day I’ve found it quite simple and easy.
If you’d like to see some sample clips of the Philips ADR810, leave me a comment below and I’ll make it happen!
from collections import deque
from semstr.constraints import Constraints, Direction
from semstr.util.amr import LABEL_ATTRIB
from semstr.validation import CONSTRAINTS
from ucca import core, layer0, layer1
from ucca.layer0 import NodeTags
from ucca.layer1 import EdgeTags
from .edge import Edge
from .node import Node
from ..action import Actions
from ..config import Config
class InvalidActionError(AssertionError):
def __init__(self, *args, is_type=False):
super().__init__(*args)
self.is_type = is_type
class State:
"""
The parser's state, responsible for applying actions and creating the final Passage
:param passage: a Passage object to get the tokens from, and everything else if training
"""
def __init__(self, passage):
self.args = Config().args
self.constraints = CONSTRAINTS.get(passage.extra.get("format"), Constraints)(implicit=self.args.implicit)
self.log = []
self.finished = False
self.passage = passage
try:
l0 = passage.layer(layer0.LAYER_ID)
except KeyError as e:
raise IOError("Passage %s is missing layer %s" % (passage.ID, layer0.LAYER_ID)) from e
try:
l1 = passage.layer(layer1.LAYER_ID)
except KeyError:
l1 = layer1.Layer1(passage)
self.labeled = any(n.outgoing or n.attrib.get(LABEL_ATTRIB) for n in l1.all)
self.terminals = [Node(i, orig_node=t, root=passage, text=t.text, paragraph=t.paragraph, tag=t.tag)
for i, t in enumerate(l0.all, start=1)]
self.stack = []
self.buffer = deque()
self.nodes = []
self.heads = set()
self.need_label = None # If we are waiting for label_node() to be called, which node is to be labeled by it
self.root = self.add_node(orig_node=l1.heads[0], is_root=True) # Root is not in the buffer
self.stack.append(self.root)
self.buffer += self.terminals
self.nodes += self.terminals
self.actions = [] # History of applied actions
self.type_validity_cache = {}
def is_valid_action(self, action):
"""
:param action: action to check for validity
:return: is the action (including tag) valid in the current state?
"""
valid = self.type_validity_cache.get(action.type_id)
if valid is None:
try:
self.check_valid_action(action)
valid = True
except InvalidActionError as e:
valid = False
if e.is_type:
self.type_validity_cache[action.type_id] = valid
return valid
def check_valid_action(self, action, message=False):
"""
Raise InvalidActionError if the action is invalid in the current state
:param action: action to check for validity
:param message: whether to add an informative message to the thrown exception
"""
def _check_possible_node():
self.check(self.node_ratio() < self.args.max_node_ratio,
message and "Non-terminals/terminals ratio: %.3f" % self.args.max_node_ratio, is_type=True)
for head in self.heads:
self.check(head.height <= self.args.max_height,
message and "Graph height: %d" % self.args.max_height, is_type=True)
def _check_possible_parent(node, t):
self.check(node.text is None, message and "Terminals may not have children: %s" % node.text, is_type=True)
if self.args.constraints and t is not None:
for rule in self.constraints.tag_rules:
violation = rule.violation(node, t, Direction.outgoing, message=message)
self.check(violation is None, violation)
self.check(self.constraints.allow_parent(node, t),
message and "%s may not be a '%s' parent (currently %s)" % (
node, t, ", ".join(map(str, node.outgoing)) or "childless"))
            self.check(not self.constraints.require_implicit_childless or not node.implicit,
                       message and "Implicit nodes may not have children: %s" % node, is_type=True)
def _check_possible_child(node, t):
self.check(node is not self.root, message and "Root may not have parents", is_type=True)
if self.args.constraints and t is not None:
self.check(not t or (node.text is None) != (t == EdgeTags.Terminal),
message and "Edge tag must be %s iff child is terminal, but node %s has edge tag %s" %
(EdgeTags.Terminal, node, t))
for rule in self.constraints.tag_rules:
violation = rule.violation(node, t, Direction.incoming, message=message)
self.check(violation is None, violation)
self.check(self.constraints.allow_child(node, t),
message and "%s may not be a '%s' child (currently %s, %s)" % (
node, t, ", ".join(map(str, node.incoming)) or "parentless",
", ".join(map(str, node.outgoing)) or "childless"))
self.check(self.constraints.possible_multiple_incoming is None or t is None or
action.remote or t in self.constraints.possible_multiple_incoming or
all(e.remote or e.tag in self.constraints.possible_multiple_incoming for e in node.incoming),
message and "Multiple non-remote '%s' parents not allowed for %s" % (t, node))
def _check_possible_edge(p, c, t):
_check_possible_parent(p, t)
_check_possible_child(c, t)
if self.args.constraints and t is not None:
if p is self.root:
self.check(self.constraints.top_level_allowed is None or not t or
t in self.constraints.top_level_allowed, message and "Root may not have %s edges" % t)
else:
self.check(self.constraints.top_level_only is None or
t not in self.constraints.top_level_only, message and "Only root may have %s edges" % t)
self.check(self.constraints.allow_root_terminal_children or p is not self.root or c.text is None,
message and "Terminal child '%s' for root" % c, is_type=True)
if self.constraints.multigraph: # Nodes may be connected by more than one edge
edge = Edge(p, c, t, remote=action.remote)
self.check(self.constraints.allow_edge(edge), message and "Edge not allowed: %s (currently: %s)" % (
edge, ", ".join(map(str, p.outgoing)) or "childless"))
else: # Simple graph, i.e., no more than one edge between the same pair of nodes
self.check(c not in p.children, message and "%s is already %s's child" % (c, p), is_type=True)
self.check(p not in c.descendants, message and "Detected cycle by edge: %s->%s" % (p, c), is_type=True)
def _check_possible_label():
self.check(self.args.node_labels, message and "Node labels disabled", is_type=True)
try:
node = self.stack[-action.tag]
except IndexError:
node = None
self.check(node is not None, message and "Labeling invalid node %s when stack size is %d" % (
action.tag, len(self.stack)))
self.check(not node.labeled, message and "Labeling already-labeled node: %s" % node, is_type=True)
self.check(node.text is None, message and "Terminals do not have labels: %s" % node, is_type=True)
if self.args.constraints:
self.check(self.constraints.allow_action(action, self.actions),
message and "Action not allowed: %s " % action + (
("after " + ", ".join("%s" % a for a in self.actions[-3:])) if self.actions else "as first"))
if action.is_type(Actions.Finish):
self.check(not self.buffer, "May only finish at the end of the input buffer", is_type=True)
if self.args.swap: # Without swap, the oracle may be incapable even of single action
self.check(self.root.outgoing or all(n is self.root or n.is_linkage or n.text for n in self.nodes),
message and "Root has no child at parse end", is_type=True)
for n in self.nodes:
self.check(not self.args.require_connected or n is self.root or n.is_linkage or n.text or
n.incoming, message and "Non-terminal %s has no parent at parse end" % n, is_type=True)
self.check(not self.args.node_labels or n.text or n.labeled,
message and "Non-terminal %s has no label at parse end" % n, is_type=True)
else:
self.check(self.action_ratio() < self.args.max_action_ratio,
message and "Actions/terminals ratio: %.3f" % self.args.max_action_ratio, is_type=True)
if action.is_type(Actions.Shift):
self.check(self.buffer, message and "Shifting from empty buffer", is_type=True)
elif action.is_type(Actions.Label):
_check_possible_label()
else: # Unary actions
self.check(self.stack, message and "%s with empty stack" % action, is_type=True)
s0 = self.stack[-1]
if action.is_type(Actions.Reduce):
if s0 is self.root:
self.check(self.root.labeled or not self.args.node_labels,
message and "Reducing root without label", is_type=True)
elif not s0.text:
self.check(not self.args.require_connected or s0.is_linkage or s0.incoming,
message and "Reducing parentless non-terminal %s" % s0, is_type=True)
self.check(not self.constraints.required_outgoing or
s0.outgoing_tags.intersection((EdgeTags.Terminal, EdgeTags.Punctuation, "")) or
s0.outgoing_tags.intersection(self.constraints.required_outgoing),
message and "Reducing non-terminal %s without %s edge" % (
s0, self.constraints.required_outgoing), is_type=True)
self.check(not self.args.node_labels or s0.text or s0.labeled,
message and "Reducing non-terminal %s without label" % s0, is_type=True)
elif action.is_type(Actions.Swap):
# A regular swap is possible since the stack has at least two elements;
# A compound swap is possible if the stack is longer than the distance
distance = action.tag or 1
self.check(1 <= distance < len(self.stack), message and "Invalid swap distance: %d" % distance)
swapped = self.stack[-distance - 1]
# To prevent swap loops: only swap if the nodes are currently in their original order
self.check(self.swappable(s0, swapped),
message and "Already swapped nodes: %s (swap index %g) <--> %s (swap index %g)"
% (swapped, swapped.swap_index, s0, s0.swap_index))
else:
pct = self.get_parent_child_tag(action)
self.check(pct, message and "%s with len(stack) = %d" % (action, len(self.stack)), is_type=True)
parent, child, tag = pct
if parent is None:
_check_possible_child(child, tag)
_check_possible_node()
elif child is None:
_check_possible_parent(parent, tag)
_check_possible_node()
else: # Binary actions
_check_possible_edge(parent, child, tag)
@staticmethod
def swappable(right, left):
return left.swap_index < right.swap_index
def is_valid_label(self, label):
"""
:param label: label to check for validity
:return: is the label valid in the current state?
"""
try:
self.check_valid_label(label)
except InvalidActionError:
return False
return True
def check_valid_label(self, label, message=False):
if self.args.constraints and label is not None:
valid = self.constraints.allow_label(self.need_label, label)
self.check(valid, message and "May not label %s as %s: %s" % (self.need_label, label, valid))
@staticmethod
def check(condition, *args, **kwargs):
if not condition:
raise InvalidActionError(*args, **kwargs)
# noinspection PyTypeChecker
def transition(self, action):
"""
Main part of the parser: apply action given by oracle or classifier
:param action: Action object to apply
"""
action.apply()
self.log = []
pct = self.get_parent_child_tag(action)
if pct:
parent, child, tag = pct
if parent is None:
parent = action.node = self.add_node(orig_node=action.orig_node)
if child is None:
child = action.node = self.add_node(orig_node=action.orig_node, implicit=True)
action.edge = self.add_edge(Edge(parent, child, tag, remote=action.remote))
if action.node:
self.buffer.appendleft(action.node)
elif action.is_type(Actions.Shift): # Push buffer head to stack; shift buffer
self.stack.append(self.buffer.popleft())
elif action.is_type(Actions.Label):
self.need_label = self.stack[-action.tag] # The parser is responsible to choose a label and set it
elif action.is_type(Actions.Reduce): # Pop stack (no more edges to create with this node)
self.stack.pop()
elif action.is_type(Actions.Swap): # Place second (or more) stack item back on the buffer
distance = action.tag or 1
s = slice(-distance - 1, -1)
self.log.append("%s <--> %s" % (", ".join(map(str, self.stack[s])), self.stack[-1]))
self.buffer.extendleft(reversed(self.stack[s])) # extendleft reverses the order
del self.stack[s]
elif action.is_type(Actions.Finish): # Nothing left to do
self.finished = True
else:
raise ValueError("Invalid action: %s" % action)
if self.args.verify:
intersection = set(self.stack).intersection(self.buffer)
assert not intersection, "Stack and buffer overlap: %s" % intersection
action.index = len(self.actions)
self.actions.append(action)
self.type_validity_cache = {}
def add_node(self, **kwargs):
"""
Called during parsing to add a new Node (not core.Node) to the temporary representation
:param kwargs: keyword arguments for Node()
"""
node = Node(len(self.nodes), swap_index=self.calculate_swap_index(), root=self.passage, **kwargs)
if self.args.verify:
assert node not in self.nodes, "Node already exists"
self.nodes.append(node)
self.heads.add(node)
self.log.append("node: %s (swap_index: %g)" % (node, node.swap_index))
if self.args.use_gold_node_labels:
            self.need_label = node  # Label the node as soon as it is created rather than applying a LABEL action
return node
def calculate_swap_index(self):
"""
Update a new node's swap index according to the nodes before and after it.
Usually the swap index is just the index, i.e., len(self.nodes).
If the buffer is not empty and its head is not a terminal, it means that it is a non-terminal created before.
In that case, the buffer head's index will be lower than the new node's index, so the new node's swap index will
be the arithmetic mean between the previous node (stack top) and the next node (buffer head).
        Then, in the validity check on the SWAP action, we will correctly identify this node as always having appeared
before the current buffer head. Otherwise, we would prevent swapping them even though it should be valid
(because they have never been swapped before).
"""
if self.buffer:
b0 = self.buffer[0]
if self.stack and (b0.text is not None or b0.swap_index <= len(self.nodes)):
s0 = self.stack[-1]
return (s0.swap_index + b0.swap_index) / 2
return None
def add_edge(self, edge):
edge.add()
self.heads.discard(edge.child)
self.log.append("edge: %s" % edge)
return edge
PARENT_CHILD = (
((Actions.LeftEdge, Actions.LeftRemote), (-1, -2)),
((Actions.RightEdge, Actions.RightRemote), (-2, -1)),
((Actions.Node, Actions.RemoteNode), (None, -1)),
((Actions.Implicit,), (-1, None)),
)
def get_parent_child_tag(self, action):
try:
for types, indices in self.PARENT_CHILD:
if action.is_type(*types):
parent, child = [None if i is None else self.stack[i] for i in indices]
break
else:
return None
return parent, child, (EdgeTags.Terminal if child and child.text else
EdgeTags.Punctuation if child and child.children and all(
c.tag == NodeTags.Punct for c in child.children)
else action.tag) # In unlabeled parsing, keep a valid graph
except IndexError:
return None
def label_node(self, label):
self.need_label.label = label
self.need_label.labeled = True
self.log.append("label: %s" % self.need_label)
self.type_validity_cache = {}
self.need_label = None
def create_passage(self, verify=True, **kwargs):
"""
Create final passage from temporary representation
:param verify: fail if this results in an improper passage
:return: core.Passage created from self.nodes
"""
Config().print("Creating passage %s from state..." % self.passage.ID, level=2)
passage = core.Passage(self.passage.ID)
passage_format = kwargs.get("format") or self.passage.extra.get("format")
if passage_format:
passage.extra["format"] = passage_format
self.passage.layer(layer0.LAYER_ID).copy(passage)
l0 = passage.layer(layer0.LAYER_ID)
l1 = layer1.Layer1(passage)
self.root.node = l1.heads[0]
if self.args.node_labels:
self.root.set_node_label()
if self.labeled: # We have a reference passage
self.root.set_node_id()
Node.attach_nodes(l0, l1, self.nodes, self.labeled, self.args.node_labels, verify)
return passage
def node_ratio(self):
return (len(self.nodes) / len(self.terminals) - 1) if self.terminals else 0
def action_ratio(self):
return (len(self.actions) / len(self.terminals)) if self.terminals else 0
def str(self, sep):
return "stack: [%-20s]%sbuffer: [%s]" % (" ".join(map(str, self.stack)), sep,
" ".join(map(str, self.buffer)))
def __str__(self):
return self.str(" ")
def __eq__(self, other):
return self.stack == other.stack and self.buffer == other.buffer and \
self.nodes == other.nodes
def __hash__(self):
return hash((tuple(self.stack), tuple(self.buffer), tuple(self.nodes)))
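The swap-index arithmetic in `calculate_swap_index` and the ordering test in `swappable` can be illustrated in isolation (a simplified sketch; the function names and index values below are made up for the example):

```python
def swap_index_for_new_node(next_index, stack_top_index, buffer_head_index,
                            buffer_head_is_old_nonterminal):
    """Simplified calculate_swap_index: a new node normally takes the next
    running index, but when the buffer head is an older, already-created
    non-terminal, the new node conceptually sits between the stack top and
    the buffer head, so it gets the arithmetic mean of their swap indices."""
    if buffer_head_is_old_nonterminal:
        return (stack_top_index + buffer_head_index) / 2.0
    return next_index

def swappable(right_swap_index, left_swap_index):
    """Simplified State.swappable: only allow a swap while the two nodes are
    still in their original relative order, preventing swap loops."""
    return left_swap_index < right_swap_index

# New node created while the stack top has index 4 and the (older) buffer
# head has index 2:
idx = swap_index_for_new_node(7, 4, 2, buffer_head_is_old_nonterminal=True)
print(idx)                 # 3.0: halfway between its neighbours
print(swappable(idx, 2))   # True: the buffer head still counts as earlier
```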
The parishes of St. Mary in Tomah and St. Andrew in Warrens merged together to form Queen of the Apostles Parish on July 1, 2015.
For well over 100 years, St. Mary's Catholic Church, with its majestic clock tower and steeple, has stood sentinel over the city of Tomah, Wisconsin. Situated on top of the highest hill in town, it is a landmark that can be seen for miles.
The parish was incorporated under the title of the Immaculate Conception of the Blessed Virgin Mary in 1867, but common usage gave it the shorter title of "Saint Mary's". The structure that serves as our house of worship today was built in 1898 under the guidance of Father Louis Wurst, pastor at that time. It was constructed at a cost of between $17,000 and $18,000. In 1899, three clusters of lights and a bell weighing 1,500 pounds were added to the church. In 1907, the pipe organ was installed; it was enlarged and electrified in 1932.
The church building was modeled after the Catholic Church on Saint Mary's Ridge, south of Tomah. The architecture is of Gothic design, characterized by pointed arches and vaulting. The red brick exterior is accented by projecting buttresses that give stability to the walls of the structure. Arches predominate over the doorways, in the steeple louvers, the stained glass windows, and above the steeple clocks.
The interior of the building also contains many arches. The ceiling is a series of steep, cathedral-type arches. The stained glass windows with arched tops depict beatified saints of the Catholic Church and are identified with the names of the families that donated funds for their purchase. Arches are visible in the Stations of the Cross which adorn the side walls, in the railing around the choir loft, in the painted designs around the statues of the Blessed Virgin Mary and Saint Joseph, in the intricate designs of the hanging light fixtures, and in the altar backscreen.
In 1939, Father J. B. Brudermann was pastor of Saint Mary's, and Bishop McGavick offered him an assistant priest if he would take on the added duties and responsibilities of a small parish known as Saint Andrew's, located several miles north of Tomah. Father Brudermann agreed, and a church building was moved from Millston to Warrens, where many parishioners now lived. Saint Andrew's continues to be served by the pastor of Saint Mary's. Saint Andrew's parish celebrated the centennial of their church building in the fall of 1996.
Of the many changes that have taken place over the years, the most significant came as a result of Vatican II. On November 28, 1964, Mass at Saint Mary's was said for the first time with the priest facing the people. The Mass was now celebrated in English rather than Latin, and lay parishioners began to take on a bigger role in assisting the pastor through church councils, liturgy, finance committees, and the like.
In 1990, an extensive renovation project was approved by parishioners at Saint Mary's. Exterior work during the summer months involved removing and replacing the roof, installing gutters and flashing, and replacing over 2,000 bricks, staining them to match exactly the existing brickwork.
In 1992, all of the stained glass windows were restored and re-leaded. Interior work began in 1993, and Masses were held in the school's multi-purpose room from January through Easter. Renovation involved replacing the wiring, refinishing the pews, installing a new speaker system, purchasing all new altar furnishings, carpeting the entire floor, and painting the interior. Upon entering the church, a completely new look was created by making a gathering room with a glass partition separating it from the worship area.
In 1996, a barrier-free entrance with elevator access to the school and church basement and a handicapped accessible ramp leading into the church were added. The church hall, P.C.C.W. kitchen, a meeting room and a ladies restroom were also completely renovated.
St. Andrew the Apostle Catholic Church in Warrens was dedicated in 1940, but its history dates back to 1896. According to journal records written in German, St. Andreas (the German spelling of St. Andrew) Parish was organized in March 1896 when families living in an area known as St. Andrew's Settlement in the Town of Millston in Jackson County voted to build a 22-foot by 40-foot church.
Serving on the building committee were Andrew Kirzinger, foreman; Mathew Lindner, secretary; John Ibinger, trustee; Joseph Meyer, John Schober, Michael Kirzinger, John Schuster, Theodore Koebler, Jacob Obermeier, Joseph Buckner, Adam Birner and George Gross. These men, along with others, provided the labor. The committee voted to borrow $100 from H.B. Mills, the founder of Millston, at an interest rate of 2 percent. A promissory note signed by the members of the building committee and a Mrs. Riedel (first name not recorded) stated each would donate $10, because the loan did not cover all the construction costs. Mathew Lindner donated the land for the church. The men intended to have the building finished by Nov. 30, 1896, for the Feast of St. Andrew. But through unforeseen delays, it was nearly the end of December before the church was complete.
On Jan. 20, 1897, a horse-drawn sleigh was sent to Millston to pick up the clergy who traveled by train to officiate over the dedication of St. Andrew's Church. Rev. Peter Becker of Mauston led the celebration of the Mass. The sermon in German was given by Rev. Peter Schnitzler of Cashton, followed by a sermon in English by Rev. Henry Flock of Sparta. Father Louis Wurst of Tomah also attended. Father Wurst, pastor of St. Mary's Parish from 1893 to 1924, agreed to say Mass once a month at St. Andrew's Settlement for a stipend of $36. It was recorded that the parish paid him $50 instead.
The parish was originally a mission of St. Joseph's Catholic Church in Fairview. Priests serving the mission parish in its early history included Fathers Lorenz Trompeter, John Ellmaurer, Rudolph Raschke and George Hardy.
John Smrekar of Millston recalled when Fathers Raschke and Hardy came on the first Tuesday of each month to conduct Mass. Father Raschke first traveled by train. Later he owned a car, as did Father Hardy. The pastors stayed two weeks during the summer to instruct the children prior to their First Communion.
Over the years, the community was served monthly by priests from Tomah, Black River Falls, Neillsville and other nearby towns. In the mid-1930s, the federal government purchased the land where the church stood as part of the Resettlement Administration's effort to move farmers residing on marginal land to more productive areas. After the families in St. Andrew’s Settlement moved, the church was no longer used.
|
import os, sys, signal, time, traceback, threading
import wasanbon
from wasanbon.core.plugins import PluginFunction, manifest
ev = threading.Event()
endflag = False
class Plugin(PluginFunction):
""" Manage RT-Component in Package """
def __init__(self):
#PluginFunction.__init__(self)
super(Plugin, self).__init__()
pass
def depends(self):
return ['admin.environment',
'admin.package',
'admin.rtc',
'admin.rtcconf',
'admin.rtcprofile',
'admin.builder',
'admin.systeminstaller',
'admin.systemlauncher',
'admin.editor']
#@property
#def rtc(self):
# import rtc
# return rtc
def _print_rtcs(self, args):
pack = admin.package.get_package_from_path(os.getcwd())
rtcs = admin.rtc.get_rtcs_from_package(pack)
for r in rtcs:
print r.rtcprofile.basicInfo.name
@manifest
def list(self, args):
""" List RTC in current Package """
self.parser.add_option('-l', '--long', help='Long Format (default=False)', default=False, action='store_true', dest='long_flag')
        self.parser.add_option('-d', '--detail', help='Detailed Format (default=False)', default=False, action='store_true', dest='detail_flag')
options, argv = self.parse_args(args)
verbose = options.verbose_flag
long = options.long_flag
detail = options.detail_flag
if detail: long = True
#package = wasanbon.plugins.admin.package.package
#package = admin.package
#admin_rtc = admin.rtc.rtc
pack = admin.package.get_package_from_path(os.getcwd())
rtcs = admin.rtc.get_rtcs_from_package(pack, verbose=verbose)
for r in rtcs:
if not long:
print ' - ' + r.rtcprofile.basicInfo.name
elif long:
print r.rtcprofile.basicInfo.name + ' : '
print ' basicInfo : '
print ' description : ' + r.rtcprofile.basicInfo.description
print ' category : ' + r.rtcprofile.basicInfo.category
print ' vendor : ' + r.rtcprofile.basicInfo.vendor
if len(r.rtcprofile.dataports):
print ' dataports : '
for d in r.rtcprofile.dataports:
if not detail:
print ' - ' + d.name
else:
print ' ' + d.name + ':'
#print ' name : ' + d.name
print ' portType : ' + d.portType
print ' type : ' + d.type
if len(r.rtcprofile.serviceports):
print ' serviceports :'
for s in r.rtcprofile.serviceports:
if not detail:
print ' - ' + s.name
else:
print ' ' + s.name + ':'
#print ' name : ' + s.name
for i in s.serviceInterfaces:
print ' ' + i.name + ':'
print ' type : ' + i.type
print ' instanceName : ' + i.instanceName
if detail:
print ' language : '
print ' kind : ' + r.rtcprofile.language.kind
if long or detail:
print ''
return 0
@manifest
def build(self, args):
self.parser.add_option('-o', '--only', help='Build Only (Not Install) (default=False)', default=False, action='store_true', dest='only_flag')
self.parser.add_option('-s', '--standalone', help='Install Standalone Mode (default=False)', default=False, action='store_true', dest='standalone_flag')
options, argv = self.parse_args(args, self._print_rtcs)
verbose = options.verbose_flag
if sys.platform == 'win32':
if verbose: sys.stdout.write('# In Windows, always build with verbose option.\n')
verbose = True
only = options.only_flag
standalone = options.standalone_flag
wasanbon.arg_check(argv, 4)
pack = admin.package.get_package_from_path(os.getcwd())
if argv[3] == 'all':
rtcs = admin.rtc.get_rtcs_from_package(pack, verbose=verbose)
else:
rtcs = [admin.rtc.get_rtc_from_package(pack, argv[3], verbose=verbose)]
return_value_map = {}
retval = 0
for rtc in rtcs:
sys.stdout.write('# Building RTC (%s)\n' % rtc.rtcprofile.basicInfo.name)
ret, msg = admin.builder.build_rtc(rtc.rtcprofile, verbose=verbose)
return_value_map[rtc.rtcprofile.basicInfo.name] = ret
if not ret:
sys.stdout.write('## Failed.\n')
retval = -1
else:
sys.stdout.write('## Success.\n')
if not only:
if not standalone:
                    # Confirm if this rtc is already installed as standalone
standalone_flag = admin.systeminstaller.is_installed(pack, rtc, verbose=verbose, standalone=True)
else:
standalone_flag = standalone
sys.stdout.write('## Installing RTC (standalone=%s).\n' % (standalone_flag is True))
admin.systeminstaller.install_rtc_in_package(pack, rtc, verbose=verbose, standalone=standalone_flag)
sys.stdout.write('### Success.\n')
if verbose:
sys.stdout.write('Build Summary:\n')
for key, value in return_value_map.items():
                sys.stdout.write(' - Build RTC (' + key + ')' + ' '*(25-len(key)) + ('Success' if value else 'Failed') + '\n')
return retval
@manifest
def clean(self, args):
options, argv = self.parse_args(args, self._print_rtcs)
verbose = options.verbose_flag
if verbose: sys.stdout.write('# Cleanup RTCs\n')
wasanbon.arg_check(argv, 4)
pack = admin.package.get_package_from_path(os.getcwd(), verbose=verbose)
if argv[3] == 'all':
rtcs = admin.rtc.get_rtcs_from_package(pack, verbose=verbose)
else:
rtcs = [admin.rtc.get_rtc_from_package(pack, argv[3], verbose=verbose)]
retval = 0
for rtc in rtcs:
            if verbose: sys.stdout.write('# Cleaning up RTC %s\n' % rtc.rtcprofile.basicInfo.name)
ret, msg = admin.builder.clean_rtc(rtc.rtcprofile, verbose=verbose)
if not ret:
retval = -1
return retval
@manifest
def delete(self, args):
""" Delete Package
# Usage $ wasanbon-admin.py package delete [PACK_NAME]"""
self.parser.add_option('-f', '--force', help='Force option (default=False)', default=False, action='store_true', dest='force_flag')
options, argv = self.parse_args(args[:], self._print_rtcs)
verbose = options.verbose_flag
force = options.force_flag
pack = admin.package.get_package_from_path(os.getcwd())
if argv[3] == 'all':
rtcs = admin.rtc.get_rtcs_from_package(pack, verbose=verbose)
else:
rtcs = [admin.rtc.get_rtc_from_package(pack, argv[3], verbose=verbose)]
import shutil
for rtc in rtcs:
if os.path.isdir(rtc.path):
sys.stdout.write('# Deleting RTC (%s)\n' % rtc.rtcprofile.basicInfo.name)
def remShut(*args):
import stat
func, path, _ = args
os.chmod(path, stat.S_IWRITE)
os.remove(path)
pass
shutil.rmtree(rtc.path, onerror = remShut)
@manifest
def edit(self, args):
""" Edit RTC with editor """
options, argv = self.parse_args(args[:], self._print_rtcs)
verbose = options.verbose_flag
pack = admin.package.get_package_from_path(os.getcwd())
rtc = admin.rtc.get_rtc_from_package(pack, argv[3], verbose=verbose)
admin.editor.edit_rtc(rtc, verbose=verbose)
@manifest
def run(self, args):
""" Run just one RTC """
options, argv = self.parse_args(args[:], self._print_rtcs)
verbose = options.verbose_flag
package = admin.package.get_package_from_path(os.getcwd())
rtc = admin.rtc.get_rtc_from_package(package, argv[3], verbose=verbose)
return self.run_rtc_in_package(package, rtc, verbose=verbose)
def run_rtc_in_package(self, package, rtc, verbose=False, background=False):
global endflag
endflag = False
def signal_action(num, frame):
print ' - SIGINT captured'
ev.set()
global endflag
endflag = True
pass
signal.signal(signal.SIGINT, signal_action)
if sys.platform == 'win32':
sys.stdout.write(' - Escaping SIGBREAK...\n')
signal.signal(signal.SIGBREAK, signal_action)
pass
sys.stdout.write('# Executing RTC %s\n' % rtc.rtcprofile.basicInfo.name)
rtcconf_path = package.rtcconf[rtc.rtcprofile.language.kind]
rtcconf = admin.rtcconf.RTCConf(rtcconf_path, verbose=verbose)
rtc_temp = os.path.join("conf", "rtc_temp.conf")
if os.path.isfile(rtc_temp):
os.remove(rtc_temp)
pass
rtcconf.sync(verbose=True, outfilename=rtc_temp)
admin.systeminstaller.uninstall_all_rtc_from_package(package, rtcconf_filename=rtc_temp, verbose=True)
admin.systeminstaller.install_rtc_in_package(package, rtc, rtcconf_filename=rtc_temp, copy_conf=False)
try:
admin.systemlauncher.launch_rtcd(package, rtc.rtcprofile.language.kind, rtcconf=rtc_temp, verbose=True)
if background:
return 0
while not endflag:
try:
time.sleep(0.1)
except IOError, e:
print e
pass
pass
pass
except:
traceback.print_exc()
return -1
        if verbose: sys.stdout.write('## Exiting RTC Manager.\n')
admin.systemlauncher.exit_all_rtcs(package, verbose=verbose)
admin.systemlauncher.terminate_system(package, verbose=verbose)
return 0
def terminate_rtcd(self, package, verbose=False):
if verbose: sys.stdout.write('# Terminating RTCDs.\n')
admin.systemlauncher.exit_all_rtcs(package, verbose=verbose)
admin.systemlauncher.terminate_system(package, verbose=verbose)
return 0
@manifest
def download_profile(self, args):
""" Run just one RTC """
self.parser.add_option('-w', '--wakeuptimeout', help='Timeout of Sleep Function when waiting for the wakeup of RTC-Daemons', default=5, dest='wakeuptimeout', action='store', type='float')
options, argv = self.parse_args(args[:], self._print_rtcs)
verbose = options.verbose_flag
wakeuptimeout = options.wakeuptimeout
package = admin.package.get_package_from_path(os.getcwd())
rtc = admin.rtc.get_rtc_from_package(package, argv[3], verbose=verbose)
if self.run_rtc_in_package(package, rtc, verbose=verbose, background=True) != 0:
return -1
wasanbon.sleep(wakeuptimeout)
rtcp = admin.rtcprofile.create_rtcprofile(rtc, verbose=verbose)
print admin.rtcprofile.tostring(rtcp)
self.terminate_rtcd(package, verbose=verbose)
return 0
@manifest
def verify_profile(self, args):
""" Run just one RTC """
self.parser.add_option('-w', '--wakeuptimeout', help='Timeout of Sleep Function when waiting for the wakeup of RTC-Daemons', default=5, dest='wakeuptimeout', action='store', type='float')
options, argv = self.parse_args(args[:], self._print_rtcs)
verbose = options.verbose_flag
wakeuptimeout = options.wakeuptimeout
package = admin.package.get_package_from_path(os.getcwd())
sys.stdout.write('# Starting RTC.\n')
rtc = admin.rtc.get_rtc_from_package(package, argv[3], verbose=verbose)
if self.run_rtc_in_package(package, rtc, verbose=verbose, background=True) != 0:
return -1
wasanbon.sleep(wakeuptimeout)
sys.stdout.write('# Acquiring RTCProfile from Inactive RTC\n')
rtcp = admin.rtcprofile.create_rtcprofile(rtc, verbose=verbose)
self.terminate_rtcd(package, verbose=verbose)
sys.stdout.write('# Comparing Acquired RTCProfile and Existing RTCProfile.\n')
retval = admin.rtcprofile.compare_rtcprofile(rtc.rtcprofile, rtcp, verbose=verbose)
if retval:
sys.stdout.write('Failed.\n# RTCProfile must be updated.\n')
return -1
        sys.stdout.write('Succeeded.\n# RTCProfile currently matches the binary.\n')
return 0
@manifest
def update_profile(self, args):
""" Run just one RTC and compare the profile between the existing RTC.xml and launched RTC, then save RTC.xml """
self.parser.add_option('-f', '--file', help='RTCProfile filename (default="RTC.xml")', default='RTC.xml', dest='filename', action='store', type='string')
self.parser.add_option('-d', '--dryrun', help='Just output on console', default=False, dest='dry_flag', action='store_true')
self.parser.add_option('-w', '--wakeuptimeout', help='Timeout of Sleep Function when waiting for the wakeup of RTC-Daemons', default=5, dest='wakeuptimeout', action='store', type='float')
options, argv = self.parse_args(args[:], self._print_rtcs)
verbose = options.verbose_flag
dry = options.dry_flag
filename = options.filename
wakeuptimeout = options.wakeuptimeout
wasanbon.arg_check(argv, 4)
rtc_name = argv[3]
package = admin.package.get_package_from_path(os.getcwd())
sys.stdout.write('# Starting RTC.\n')
rtc = admin.rtc.get_rtc_from_package(package, rtc_name, verbose=verbose)
standalone = admin.systeminstaller.is_installed(package, rtc, standalone=True, verbose=verbose)
if standalone:
admin.systemlauncher.launch_standalone_rtc(package, rtc, stdout=True, verbose=verbose)
pass
else:
if self.run_rtc_in_package(package, rtc, verbose=verbose, background=True) != 0:
return -1
wasanbon.sleep(wakeuptimeout)
sys.stdout.write('# Acquiring RTCProfile from Inactive RTC\n')
rtcp = admin.rtcprofile.create_rtcprofile(rtc, verbose=verbose)
if standalone:
pass
else:
self.terminate_rtcd(package, verbose=verbose)
sys.stdout.write('# Comparing Acquired RTCProfile and Existing RTCProfile.\n')
retval = admin.rtcprofile.compare_rtcprofile(rtc.rtcprofile, rtcp, verbose=verbose)
if retval:
filepath = os.path.join(rtc.path, filename)
if not dry:
outstr = admin.rtcprofile.tostring(retval, pretty_print=True)
                if outstr is None:
sys.stdout.write('# RTC Profile save failed.\n')
return -1
if os.path.isfile(filepath):
f = filepath + wasanbon.timestampstr()
os.rename(filepath, f)
pass
fout = open(filepath, 'w')
fout.write(outstr)
fout.close()
else:
sys.stdout.write(admin.rtcprofile.tostring(retval, pretty_print=True))
            sys.stdout.write('Succeeded.\n')
return 0
        sys.stdout.write('Succeeded.\n')
return 0
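The `run_rtc_in_package` method above installs a SIGINT handler that flips a module-level flag and sets a `threading.Event`, and the polling loop watches that flag so the process can tear the RTC daemon down cleanly. A minimal, self-contained sketch of that pattern (the handler is invoked directly here to stand in for a real Ctrl-C):

```python
import signal
import threading

# Module-level coordination state, mirroring the plugin's ev / endflag pair.
ev = threading.Event()
endflag = False

def signal_action(num, frame):
    # Flip the flag and wake any waiter; the main loop exits on its own,
    # so cleanup code after the loop always runs.
    global endflag
    ev.set()
    endflag = True

# Register for SIGINT (the plugin also registers SIGBREAK on Windows).
signal.signal(signal.SIGINT, signal_action)

# Calling the handler directly simulates the signal for this sketch.
signal_action(signal.SIGINT, None)
```

The point of the flag-plus-loop shape (rather than letting KeyboardInterrupt propagate) is that the shutdown path — `exit_all_rtcs` and `terminate_system` — runs unconditionally after the loop.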
|
Release your inner James Bond and try your hand at a traditional Roulette table. Roulette is one of the oldest and best known of all casino games. The aim of the game is to guess in which numbered segment the ball will come to rest.
|
class Range(object):
def __init__(self, tpl):
self.start = int(tpl[0]) if tpl[0] is not None and tpl[0] != b'' else None
self.end = int(tpl[1]) if tpl[1] is not None and tpl[1] != b'' else None
if self.start is None and self.end is not None and self.end > 0:
self.end *= -1
def __str__(self):
return "Byte Range: {} - {}".format(self.start, self.end)
def __len__(self, cl=0):
if self.start is None and self.end is not None:
return self.end * -1
elif self.end is None:
return cl - self.start
return self.end - self.start + 1
def header(self):
r = ''
if self.start is not None:
r += '{}-'.format(self.start)
if self.end is not None:
r += "{}".format(self.end)
return r
def from_content(self, content):
""" Try and get the range from the supplied content. If it isn't possible,
return None.
:param content: The content stream to extract the range from.
:return: The extracted content or None.
"""
csz = len(content)
if self.end is not None:
if self.end < 0 and csz < self.end * -1:
print("not big enough")
return None
if self.end > csz:
print("end > content length")
return None
else:
if self.start > csz:
print("start > content length")
return None
if self.end is None:
return content[self.start:]
elif self.start is None:
return content[self.end:]
return content[self.start: self.end + 1]
def absolutes(self, clen):
start = self.start
if self.start is None:
if self.end < 0:
return clen + self.end, clen - 1
start = 0
end = clen - 1
if self.end is not None:
if self.end < 0:
end = clen + self.end - 1
else:
end = self.end - 1
if end < start:
end = start
return start, end
def absolute_range(self, clen):
start, end = self.absolutes(clen)
return "{}-{}/{}".format(start, end, clen)
|
The Winter Gardens boasts amazing sea views and a choice of luxurious reception suites, perfect for a glamorous day.
Built in 1927, the Winter Gardens Pavilion is an impressive landmark on the Weston-super-Mare seafront, sure to leave a lasting impression with its unique sunken ballroom and neo-Georgian details. The Winter Gardens Pavilion is licensed to perform both weddings and civil ceremonies and is fully licensed to serve alcohol. All of the function rooms feature the latest conference technology, and the building has full WiFi coverage. Catering is available, with packages ranging from tea and coffee to full banquet-style dining, and bespoke menus can be created for your event. Weddings benefit from our dedicated events team, which has many years of experience running a variety of events. The Pavilion is well connected across the South West, with excellent transport links to Bristol. It's on a bus route and about fifteen minutes’ walk from the train station. The Winter Gardens is also next to the town’s main multi-storey car park. The Winter Gardens' ballroom is one of the most spectacular event spaces in the UK. Its elegant sunken dancefloor is sure to leave a lasting impression on your guests and help you create a memorable event. The ballroom features a state-of-the-art colour-changing lighting system, which enables you to dress the room in any colour or variety of colours using a user-friendly touchscreen device. This opens up a range of possibilities for weddings, ceremonies and other events.
Congratulations on your engagement, and I am pleased you have decided to enquire with us about holding your special day. Please provide information on what your requirements are for the big day and we will get back to you in due course. Best Wishes, Simon Page.
|
# Copyright (c) 2016 Mirantis, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
import json
import jsonschema
from tempest.lib.common.utils import data_utils
from ironicclient.tests.functional import base
def _is_valid_json(json_response, schema):
"""Verify JSON is valid.
:param json_response: JSON response from CLI
:type json_response: string
:param schema: expected schema of response
    :type schema: dictionary
"""
try:
json_response = json.loads(json_response)
jsonschema.validate(json_response, schema)
except jsonschema.ValidationError:
return False
return True
class TestNodeJsonResponse(base.FunctionalTestBase):
"""Test JSON responses for node commands."""
node_schema = {
"type": "object",
"properties": {
"target_power_state": {"type": ["string", "null"]},
"extra": {"type": "object"},
"last_error": {"type": ["string", "null"]},
"updated_at": {"type": ["string", "null"]},
"maintenance_reason": {"type": ["string", "null"]},
"provision_state": {"type": "string"},
"clean_step": {"type": "object"},
"uuid": {"type": "string"},
"console_enabled": {"type": "boolean"},
"target_provision_state": {"type": ["string", "null"]},
"raid_config": {"type": "string"},
"provision_updated_at": {"type": ["string", "null"]},
"maintenance": {"type": "boolean"},
"target_raid_config": {"type": "string"},
"inspection_started_at": {"type": ["string", "null"]},
"inspection_finished_at": {"type": ["string", "null"]},
"power_state": {"type": ["string", "null"]},
"driver": {"type": "string"},
"reservation": {"type": ["string", "null"]},
"properties": {"type": "object"},
"instance_uuid": {"type": ["string", "null"]},
"name": {"type": ["string", "null"]},
"driver_info": {"type": "object"},
"created_at": {"type": "string"},
"driver_internal_info": {"type": "object"},
"chassis_uuid": {"type": ["string", "null"]},
"instance_info": {"type": "object"}
}
}
def setUp(self):
super(TestNodeJsonResponse, self).setUp()
self.node = self.create_node()
def test_node_list_json(self):
"""Test JSON response for nodes list."""
schema = {
"type": "array",
"items": {
"type": "object",
"properties": {
"instance_uuid": {"type": ["string", "null"]},
"maintenance": {"type": "boolean"},
"name": {"type": ["string", "null"]},
"power_state": {"type": ["string", "null"]},
"provision_state": {"type": "string"},
"uuid": {"type": "string"}}}
}
response = self.ironic('node-list', flags='--json',
params='', parse=False)
self.assertTrue(_is_valid_json(response, schema))
def test_node_show_json(self):
"""Test JSON response for node show."""
response = self.ironic('node-show', flags='--json', params='{0}'
.format(self.node['uuid']), parse=False)
self.assertTrue(_is_valid_json(response, self.node_schema))
def test_node_validate_json(self):
"""Test JSON response for node validation."""
schema = {
"type": "array",
"items": {
"type": "object",
"properties": {
"interface": {"type": ["string", "null"]},
"result": {"type": "boolean"},
"reason": {"type": ["string", "null"]}}}
}
response = self.ironic('node-validate', flags='--json',
params='{0}'.format(self.node['uuid']),
parse=False)
self.assertTrue(_is_valid_json(response, schema))
def test_node_show_states_json(self):
"""Test JSON response for node show states."""
schema = {
"type": "object",
"properties": {
"target_power_state": {"type": ["string", "null"]},
"target_provision_state": {"type": ["string", "null"]},
"last_error": {"type": ["string", "null"]},
"console_enabled": {"type": "boolean"},
"provision_updated_at": {"type": ["string", "null"]},
"power_state": {"type": ["string", "null"]},
"provision_state": {"type": "string"}
}
}
response = self.ironic('node-show-states', flags='--json',
params='{0}'.format(self.node['uuid']),
parse=False)
self.assertTrue(_is_valid_json(response, schema))
def test_node_create_json(self):
"""Test JSON response for node creation."""
schema = {
"type": "object",
"properties": {
"uuid": {"type": "string"},
"driver_info": {"type": "object"},
"extra": {"type": "object"},
"driver": {"type": "string"},
"chassis_uuid": {"type": ["string", "null"]},
"properties": {"type": "object"},
"name": {"type": ["string", "null"]},
}
}
response = self.ironic('node-create', flags='--json',
params='-d fake', parse=False)
self.assertTrue(_is_valid_json(response, schema))
def test_node_update_json(self):
"""Test JSON response for node update."""
node_name = data_utils.rand_name('test')
response = self.ironic('node-update', flags='--json',
params='{0} add name={1}'
.format(self.node['uuid'], node_name),
parse=False)
self.assertTrue(_is_valid_json(response, self.node_schema))
class TestDriverJsonResponse(base.FunctionalTestBase):
"""Test JSON responses for driver commands."""
def test_driver_list_json(self):
"""Test JSON response for drivers list."""
schema = {
"type": "array",
"items": {
"type": "object",
"properties": {
"name": {"type": "string"},
"hosts": {"type": "string"},
}}
}
response = self.ironic('driver-list', flags='--json', parse=False)
self.assertTrue(_is_valid_json(response, schema))
def test_driver_show_json(self):
"""Test JSON response for driver show."""
schema = {
"type": "object",
"properties": {
"name": {"type": "string"},
"hosts": {
"type": "array",
"items": {"type": "string"}}
}
}
drivers_names = self.get_drivers_names()
for driver in drivers_names:
response = self.ironic('driver-show', flags='--json',
params='{0}'.format(driver), parse=False)
self.assertTrue(_is_valid_json(response, schema))
def test_driver_properties_json(self):
"""Test JSON response for driver properties."""
schema = {
"type": "object",
"additionalProperties": {"type": "string"}
}
drivers_names = self.get_drivers_names()
for driver in drivers_names:
response = self.ironic('driver-properties', flags='--json',
params='{0}'.format(driver), parse=False)
self.assertTrue(_is_valid_json(response, schema))
|
To be fair, holiday rentals are nothing new, but they’re significantly more popular now than they were even a few years ago. Today’s travellers are more comfortable than they once were booking a week at a private home with an enviable location at their target destination. And with convenient services like Airbnb and candid review platforms such as TripAdvisor, it’s easier than ever for travellers to book a holiday cottage with confidence.
Naturally, this has left some well-established hoteliers scratching their heads and wondering how to stay relevant in an increasingly diverse and competitive marketplace. Hotels have countered the privacy of holiday lets with their own cosier services and facilities. Nooks and lounges, spa experiences and intimate dining opportunities are all part of an attempt to add a lifestyle element to current hospitality models.
Of course, for those who own holiday lets around the UK, soaring popularity comes with its own set of challenges. Property owners are looking for ways that they can make their holiday lets more attractive to customers. In this post, we’re going to look more closely at this trend and offer some tips on how you can make your holiday property more successful.
Before we go any further, let’s take a moment to consider a few specific reasons that holiday homes have become so popular. For starters, it’s worth noting that owners of holiday homes stand to make a lot of money in coming years. An article published in the Telegraph late last year pointed out that a two-bedroom cottage could earn landlords as much as £15,000 a year in extra income. That’s no small sum – and it’s boosted at least in part by the tax relief currently available for homes of this type.
Of course, no industry is going to be successful without an enthusiastic base of customers eager to take advantage of the offerings. Based on a survey released last year, TripAdvisor found that the number of people who intended to stay in a holiday rental over the coming year had grown by 7 per cent. This was specifically for the US market, but it certainly reflects a global movement.
In the so-called ‘sharing economy’, it’s easier than ever for private owners to advertise rooms online. They’re likely to accept less per night than a comparably sized hotel, which is enticing to customers. This, in turn, prompts well-established hospitality providers to look for cross-over opportunities so that they can cash in on this trend. As more seasoned hospitality players enter the holiday-homes market, the quality of the rooms and service is driven up even higher.
The more interest customers show in rentals of this nature, the harder large hospitality companies work to cash in on the potential market share. This drives up the quality of holiday rentals, boosting consumer interest even more. In other words, the cycle fuels itself.
As the quality of holiday rentals continues to rise, private owners who would like to continue cashing in on this opportunity have to look for ways to remain competitive. One way that they can compete is through price, as the price of a holiday cottage or villa is usually more competitive than a hotel unit of comparable size. But there’s much more to the art of hospitality than price and room size.
Today’s hotels are more concerned than ever with giving their customers a sublime night’s sleep. Westin’s ‘Heavenly Bed’ experience is a good example of this. The hotel giant has improved the quality of its mattresses and linens and is even offering these products for sale. Owners of holiday lets can compete by purchasing better mattresses for their rooms and even improving the quality of their linens.
Most holiday rentals focus on self-catering as a means of saving money on holiday. However, some customers may be interested in planning a special dinner or even renting the property to host an event or a family gathering. Consider looking for a small local catering business that you can call upon for occasions like this. You can also enhance the dining atmosphere with crisp table linens, serviettes and more ornate place settings.
Provide extra amenities in the property.
In a hospitality setting, the little things can make all the difference. Hotels have learned the importance of fitting out their rooms with extra amenities – everything from comfortable slippers and robes to in-room sound systems and iPod docking stations. The more creature comforts you can add, the better.
Stalbridge Linen Services can assist with some of the above. Our minimum subscription requirements are low, and we only require that our customers achieve a weekly invoice amount of £30. That means you don’t have to have that many rooms in order to take advantage of our services.
Allowing us to take care of your supply of bedsheets and towels will free your staff up to focus on other tasks. We see to the laundry and supply of your linens so that you can, in turn, devote your time and energy to creating the perfect holiday experience for your customers. Contact us today to learn more.
|
import logging
import time
from datetime import datetime
import types
from .micromodels.fields import ModelCollectionField
from . import SlickConnection, SlickCommunicationError, Release, Build, BuildReference, Component, ComponentReference, \
Project, Testplan, Testrun, Testcase, RunStatus, Result, ResultStatus, LogEntry, Configuration, TestrunGroup, TestrunReference
def add_log_entry(self, message, level='DEBUG', loggername='', exceptionclassname='', exceptionmessage='', stacktrace=''):
entry = LogEntry()
entry.entryTime = int(round(time.time() * 1000))
entry.message = message
entry.level = level
entry.loggerName = loggername
entry.exceptionClassName = exceptionclassname
entry.exceptionMessage = exceptionmessage
entry.exceptionStackTrace = stacktrace
if not hasattr(self, 'log'):
self.log = []
self.log.append(entry)
def update_result(self):
self.connection.results(self).update()
def update_testrun(self):
if hasattr(self, 'summary'):
del self.summary
self.connection.testruns(self).update()
def add_file_to_result(self, filename, fileobj=None):
slickfile = self.connection.files.upload_local_file(filename, fileobj)
if not hasattr(self, 'files'):
self.files = []
self.files.append(slickfile)
self.update()
def make_result_updatable(result, connection):
result.connection = connection
result.update = types.MethodType(update_result, result)
result.add_file = types.MethodType(add_file_to_result, result)
result.add_log_entry = types.MethodType(add_log_entry, result)
def make_testrun_updatable(testrun, connection):
testrun.connection = connection
testrun.update = types.MethodType(update_testrun, testrun)
testrun.add_file = types.MethodType(add_file_to_result, testrun)
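The two helpers above retrofit methods onto already-constructed model objects with `types.MethodType`. A minimal, self-contained sketch of that binding pattern (the `Result` class here is a hypothetical stand-in for the slick model, not the real one):

```python
import types

class Result(object):
    """Stand-in for a slick Result model, for illustration only."""
    def __init__(self):
        self.log = []

def add_log_entry(self, message, level='DEBUG'):
    # A free function written with a `self` parameter; once bound,
    # it behaves like an ordinary instance method.
    self.log.append((level, message))

r = Result()
# Bind the function to this one instance, as make_result_updatable() does
# for update(), add_file() and add_log_entry().
r.add_log_entry = types.MethodType(add_log_entry, r)
r.add_log_entry('connected', level='INFO')
```

The advantage over subclassing is that the model classes stay plain data containers; only instances that actually pass through `make_result_updatable`/`make_testrun_updatable` gain a connection and mutating methods.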
class SlickQA(object):
def __init__(self, url, project_name, release_name, build_name, test_plan=None, test_run=None, environment_name=None, test_run_group_name=None):
self.logger = logging.getLogger('slick-reporter.Slick')
self.slickcon = None
self.is_connected = False
self.project = None
self.environment = environment_name
self.component = None
self.componentref = None
self.testplan = test_plan
self.releaseref = None
self.release = release_name
self.build = build_name
self.buildref = None
self.testrun = test_run
self.testrunref = None
self.testrun_group = test_run_group_name
self.logqueue = []
self.init_connection(url)
if self.is_connected:
self.logger.debug("Initializing Slick...")
self.init_project(project_name)
self.init_release()
self.init_build()
self.init_testplan()
self.init_environment()
self.init_testrun()
self.init_testrungroup()
# TODO: if you have a list of test cases, add results for each with notrun status
def init_connection(self, url):
try:
self.logger.debug("Checking connection to server...")
self.slickcon = SlickConnection(url)
successful = self.verify_connection()
if not successful:
raise SlickCommunicationError(
"Unable to verify connection to {} by trying to access the version api".format(
self.slickcon.getUrl()))
self.is_connected = True
except SlickCommunicationError as se:
self.logger.error(se.message)
def verify_connection(self):
version = self.slickcon.version.findOne()
if version:
self.logger.debug("Successfully connected. Using version {}".format(version))
return True
self.logger.debug("Unable to connect. No version available.")
return False
def init_project(self, project, create=True):
self.logger.debug("Looking for project by name '{}'.".format(project))
try:
self.project = self.slickcon.projects.findByName(project)
except SlickCommunicationError as err:
self.logger.error("Error communicating with slick: {}".format(err.args[0]))
if self.project is None and create:
self.logger.error("Unable to find project with name '{}', creating...".format(self.project))
self.project = Project()
self.project.name = project
self.project = self.slickcon.projects(self.project).create()
assert (isinstance(self.project, Project))
self.logger.info("Using project with name '{}' and id: {}.".format(self.project.name, self.project.id))
def init_release(self):
release_name = self.release
self.logger.debug("Looking for release '{}' in project '{}'".format(release_name, self.project.name))
if not hasattr(self.project, 'releases'):
self.project.releases = []
for release in self.project.releases:
assert isinstance(release, Release)
if release.name == release_name:
self.logger.info("Found Release '{}' with id '{}' in Project '{}'.".format(release.name, release.id,
self.project.id))
self.release = release
self.releaseref = release.create_reference()
break
else:
self.logger.info("Adding release {} to project {}.".format(release_name, self.project.name))
release = Release()
release.name = release_name
self.release = self.slickcon.projects(self.project).releases(release).create()
assert isinstance(self.release, Release)
self.project = self.slickcon.projects(self.project).get()
self.releaseref = self.release.create_reference()
self.logger.info("Using newly created release '{}' with id '{}' in Project '{}'.".format(self.release.name,
self.release.id, self.project.name))
def init_build(self):
build_number = self.build
if not hasattr(self.release, 'builds'):
self.release.builds = []
for build in self.release.builds:
if build.name == build_number:
self.logger.debug("Found build with name '{}' and id '{}' on release '{}'".format(build.name, build.id,
self.release.name))
self.buildref = build.create_reference()
break
else:
self.logger.info("Adding build {} to release {}.".format(build_number, self.release.name))
build = Build()
build.name = build_number
build.built = datetime.now()
self.buildref = (
self.slickcon.projects(self.project).releases(self.release).builds(build).create()).create_reference()
assert isinstance(self.buildref, BuildReference)
self.logger.info("Using newly created build '{}' with id '{}' in Release '{}' in Project '{}'.".format(
self.buildref.name, self.buildref.buildId, self.release.name, self.project.name))
def get_component(self, component_name):
self.logger.debug("Looking for component with name '{}' in project '{}'".format(component_name, self.project.name))
for comp in self.project.components:
if comp.name == component_name:
assert isinstance(comp, Component)
self.logger.info("Found component with name '{}' and id '{}' in project '{}'.".format(comp.name, comp.id,
self.project.name))
self.component = comp
self.componentref = self.component.create_reference()
assert isinstance(self.componentref, ComponentReference)
return self.component
def create_component(self, component_name):
self.logger.info("Adding component {} to project {}.".format(component_name, self.project.name))
component = Component()
component.name = component_name
component.code = component_name.replace(" ", "-")
self.component = self.slickcon.projects(self.project).components(component).create()
self.project.components.append(self.component)
self.componentref = self.component.create_reference()
self.logger.info("Using newly created component '{}' with id '{}' in project '{}'.".format(
self.component.name, self.component.id, self.project.name))
return self.component
def init_testplan(self):
if self.testplan:
testplan_name = self.testplan
testplan = self.slickcon.testplans.findOne(projectid=self.project.id, name=testplan_name)
if testplan is None:
self.logger.debug("Creating testplan with name '{}' connected to project '{}'.".format(testplan_name,
self.project.name))
testplan = Testplan()
testplan.name = testplan_name
testplan.project = self.project.create_reference()
testplan.isprivate = False
testplan.createdBy = "slickqa-python"
testplan = self.slickcon.testplans(testplan).create()
self.logger.info("Using newly create testplan '{}' with id '{}'.".format(testplan.name, testplan.id))
else:
self.logger.info("Found (and using) existing testplan '{}' with id '{}'.".format(testplan.name, testplan.id))
self.testplan = testplan
else:
self.logger.warn("No testplan specified for the testrun.")
def init_environment(self):
if self.environment is not None:
env = self.slickcon.configurations.findOne(name=self.environment, configurationType="ENVIRONMENT")
if env is None:
env = Configuration()
env.name = self.environment
env.configurationType = "ENVIRONMENT"
env = self.slickcon.configurations(env).create()
self.environment = env
def init_testrun(self):
testrun = Testrun()
if self.testrun is not None:
testrun.name = self.testrun
else:
if self.testplan is not None:
testrun.name = self.testplan.name
else:
testrun.name = 'Tests run from slick-python'
if self.testplan is not None:
testrun.testplanId = self.testplan.id
testrun.project = self.project.create_reference()
testrun.release = self.releaseref
testrun.build = self.buildref
testrun.state = RunStatus.RUNNING
testrun.runStarted = int(round(time.time() * 1000))
if self.environment is not None and isinstance(self.environment, Configuration):
testrun.config = self.environment.create_reference()
self.logger.debug("Creating testrun with name {}.".format(testrun.name))
self.testrun = self.slickcon.testruns(testrun).create()
make_testrun_updatable(self.testrun, self.slickcon)
def init_testrungroup(self):
if self.testrun_group is not None:
trg = self.slickcon.testrungroups.findOne(name=self.testrun_group)
if trg is None:
trg = TestrunGroup()
trg.name = self.testrun_group
trg.testruns = []
trg.created = datetime.now()
trg = self.slickcon.testrungroups(trg).create()
self.testrun_group = self.slickcon.testrungroups(trg).add_testrun(self.testrun)
def add_log_entry(self, message, level='DEBUG', loggername='', exceptionclassname='', exceptionmessage='', stacktrace=''):
entry = LogEntry()
entry.entryTime = int(round(time.time() * 1000))
entry.message = message
entry.level = level
entry.loggerName = loggername
entry.exceptionClassName = exceptionclassname
entry.exceptionMessage = exceptionmessage
entry.exceptionStackTrace = stacktrace
self.logqueue.append(entry)
def finish_testrun(self):
assert isinstance(self.testrun, Testrun)
testrun = Testrun()
if self.testrun.name:
testrun.name = self.testrun.name
else:
testrun.name = 'Tests run from slick-python'
testrun.id = self.testrun.id
testrun.runFinished = int(round(time.time() * 1000))
testrun.state = RunStatus.FINISHED
self.logger.debug("Finishing testrun named {}, with id {}.".format(testrun.name, testrun.id))
self.slickcon.testruns(testrun).update()
# TODO: need to add logs, files, etc. to a result
def file_result(self, name, status=ResultStatus.FAIL, reason=None, runlength=0, testdata=None, runstatus=RunStatus.FINISHED):
test = None
if testdata is not None:
assert isinstance(testdata, Testcase)
if testdata.automationId:
test = self.slickcon.testcases.findOne(projectid=self.project.id, automationId=testdata.automationId)
if test is None and hasattr(testdata, 'automationKey') and testdata.automationKey is not None:
                test = self.slickcon.testcases.findOne(projectid=self.project.id, automationKey=testdata.automationKey)
if test is None:
test = self.slickcon.testcases.findOne(projectid=self.project.id, name=name)
if test is None:
self.logger.debug("Creating testcase with name '{}' on project '{}'.".format(name, self.project.name))
test = Testcase()
if testdata is not None:
test = testdata
test.name = name
test.project = self.project.create_reference()
test = self.slickcon.testcases(test).create()
self.logger.info("Using newly created testcase with name '{}' and id '{}' for result.".format(name, test.id))
else:
if testdata is not None:
# update the test with the data passed in
assert isinstance(test, Testcase)
testdata.id = test.id
testdata.name = name
testdata.project = self.project.create_reference()
test = self.slickcon.testcases(testdata).update()
self.logger.info("Found testcase with name '{}' and id '{}' for result.".format(test.name, test.id))
result = Result()
result.testrun = self.testrun.create_reference()
result.testcase = test.create_reference()
result.project = self.project.create_reference()
result.release = self.releaseref
result.build = self.buildref
if self.component is not None:
result.component = self.componentref
if len(self.logqueue) > 0:
result.log = []
result.log.extend(self.logqueue)
self.logqueue[:] = []
result.reason = reason
result.runlength = runlength
result.end = int(round(time.time() * 1000))
result.started = result.end - result.runlength
result.status = status
result.runstatus = runstatus
self.logger.debug("Filing result of '{}' for test with name '{}'".format(result.status, result.testcase.name))
result = self.slickcon.results(result).create()
self.logger.info("Filed result of '{}' for test '{}', result id: {}".format(result.status, result.testcase.name,
result.id))
make_result_updatable(result, self.slickcon)
return result
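Each of the `init_*` methods above follows the same find-or-create pattern: look the entity up on the server by name, and create it if the lookup returns nothing. A minimal, self-contained sketch of that pattern is below; the `find` and `create` callables are hypothetical stand-ins for the slick connection calls, backed here by a toy in-memory store.

```python
def find_or_create(name, find, create):
    """Return the entity named `name`, creating it if the lookup fails.

    `find` maps a name to an entity or None; `create` makes a new one.
    This mirrors the lookup-then-create flow of init_project/init_release.
    """
    entity = find(name)
    if entity is None:
        entity = create(name)
    return entity

# Toy in-memory backend standing in for the slick server.
store = {}

def find(name):
    return store.get(name)

def create(name):
    store[name] = {"name": name, "id": len(store) + 1}
    return store[name]

project = find_or_create("My Project", find, create)
again = find_or_create("My Project", find, create)
assert again is project  # the second call reuses the existing entity
```

The same shape covers projects, releases, builds, test plans, and test run groups; only the lookup and creation calls differ.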
|
The vb.2win.live Roblox Robux and Tix Hack Tool was created to assist you when playing vb.2win.live Roblox. It generates large amounts of Robux and Tix every day. We apologize that we cannot yet offer unlimited amounts, but don't worry: the available amounts are still more than enough to enjoy vb.2win.live Roblox comfortably without buying any of the in-app purchases that the developers offer.
vb.2win.live Roblox Generator Script Latest Version (With New Version updated on (time)).
Why Should You Use this vb.2win.live Roblox Hack Tool?
This vb.2win.live Roblox Cheat Tool was built and tested by our own dedicated team. It works well on Android phones and tablets (any Android version), iPhone, iPad and iPad Mini, among others. As explained above, you don't have to modify your device's system (i.e. root or jailbreak it). It has a clean, user-friendly interface that makes the vb.2win.live Roblox Generator Tool easy to use. With Anti Ban™ protection, your vb.2win.live Roblox account will be as safe as if you were playing normally. The server runs 24/7, so don't worry about when you use this vb.2win.live Roblox Cheat Tool.
First, we guarantee that our website is safe: it will not give your device a virus, and we will not hack into your account or steal your personal information. The developers of our website are so skilled that it was built to serve only one purpose: to generate and deliver vb.2win.live Roblox Robux and Tix.
vb.2win.live Roblox Hack Generator Online is completely free: the Robux and Tix you generate will not cost you anything, no matter how many you create. We are here to help you boost your gameplay and beat all your competitors.
Lastly, vb.2win.live Roblox Hack Online is extremely straightforward to use and will not waste your time. We always make sure the site has no distractions such as pop-up ads and clickbait articles. We also don't require you to download anything; everything is done through the website.
TOTAL Monthly: 1,554,735 Robux and 2,366,096 Tix for vb.2win.live Roblox — 100% free, fast and easy. The best working hacks for Android and iOS games, with no human verification; the vb.2win.live Roblox hack generator takes just 7 minutes and is completely free.
|
#!/usr/bin/env python
# CPIP is a C/C++ Preprocessor implemented in Python.
# Copyright (C) 2008-2017 Paul Ross
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License along
# with this program; if not, write to the Free Software Foundation, Inc.,
# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA.
#
# Paul Ross: apaulross@gmail.com
"""Writes XML and XHTML."""
__author__ = 'Paul Ross'
__date__ = '2009-09-15'
__rights__ = 'Copyright (c) Paul Ross'
import logging
#import traceback
import sys
#import htmlentitydefs
import base64
from cpip import ExceptionCpip
#: Global flag that sets the error behaviour
#:
#: If ``True`` then this module may raise an ``ExceptionXml`` and that might mask other
#: exceptions.
#:
#: If ``False`` no ExceptionXml will be raised but a ``logging.error(...)``
#: will be written. These will not mask other Exceptions.
RAISE_ON_ERROR = True
class ExceptionXml(ExceptionCpip):
"""Exception specialisation for the XML writer."""
pass
class ExceptionXmlEndElement(ExceptionXml):
"""Exception specialisation for end of element."""
pass
#####################################
# Section: Encoding/decoding methods.
#####################################
def encodeString(theS, theCharPrefix='_'):
"""Returns a string that is the argument encoded.
From RFC3548:
.. code-block:: text
Table 1: The Base 64 Alphabet
Value Encoding Value Encoding Value Encoding Value Encoding
0 A 17 R 34 i 51 z
1 B 18 S 35 j 52 0
2 C 19 T 36 k 53 1
3 D 20 U 37 l 54 2
4 E 21 V 38 m 55 3
5 F 22 W 39 n 56 4
6 G 23 X 40 o 57 5
7 H 24 Y 41 p 58 6
8 I 25 Z 42 q 59 7
9 J 26 a 43 r 60 8
10 K 27 b 44 s 61 9
11 L 28 c 45 t 62 +
12 M 29 d 46 u 63 /
13 N 30 e 47 v
14 O 31 f 48 w (pad) =
15 P 32 g 49 x
16 Q 33 h 50 y
See section 3 of : http://www.faqs.org/rfcs/rfc3548.html
:param theS: The string to be encoded.
:type theS: ``str``
:param theCharPrefix: A character to prefix the string.
:type theCharPrefix: ``str``
:returns: ``str`` -- Encoded string.
"""
if len(theCharPrefix) != 1:
errMsg = 'Prefix for encoding string must be a single character, not "%s"' % theCharPrefix
if RAISE_ON_ERROR:
raise ExceptionXml(errMsg)
logging.error(errMsg)
if sys.version_info[0] == 2:
myBy = bytes(theS)
retVal = base64.b64encode(myBy)
elif sys.version_info[0] == 3:
myBy = bytes(theS, 'ascii')
retVal = base64.b64encode(myBy).decode()
else:
assert 0, 'Unknown Python version %d' % sys.version_info.major
# if isinstance(theS, str):
# retVal = base64.b64encode(bytes(theS, 'ascii')).decode()
# else:
# retVal = base64.b64encode(theS)
# retVal = base64.b64encode(myBy)
# post-fix base64
retVal = retVal.replace('+', '-') \
.replace('/', '.') \
.replace('=', '_')
# Lead with prefix
return theCharPrefix + retVal
def decodeString(theS):
"""Returns a string that is the argument decoded. May raise a TypeError."""
# pre-fix base64
temp = theS[1:].replace('-', '+') \
.replace('.', '/') \
.replace('_', '=')
temp = base64.b64decode(temp)
return temp
def nameFromString(theStr):
"""Returns a name from a string.
See http://www.w3.org/TR/1999/REC-html401-19991224/types.html#type-cdata
"ID and NAME tokens must begin with a letter ([A-Za-z]) and may be
followed by any number of letters, digits ([0-9]), hyphens ("-"),
underscores ("_"), colons (":"), and periods (".").
This also works for in namespaces as ':' is not used in the encoding.
:param theStr: The string to be encoded.
:type theStr: ``str``
:returns: ``str`` -- Encoded string."""
return encodeString(theStr, 'Z')
#################################
# End: Encoding/decoding methods.
#################################
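To illustrate the two functions above, here is a self-contained Python 3 sketch of the same round-trip: base64-encode, then apply the substitutions (`+` to `-`, `/` to `.`, `=` to `_`) that make the result safe in XML names, and reverse them on decode. The function names are illustrative stand-ins, and unlike `decodeString` this sketch decodes back to `str` rather than `bytes`.

```python
import base64

def encode_string(s, prefix='_'):
    # base64-encode, then swap the characters that are unsafe in XML names
    out = base64.b64encode(s.encode('ascii')).decode()
    out = out.replace('+', '-').replace('/', '.').replace('=', '_')
    return prefix + out

def decode_string(s):
    # drop the prefix, undo the substitutions, then base64-decode
    tmp = s[1:].replace('-', '+').replace('.', '/').replace('_', '=')
    return base64.b64decode(tmp).decode('ascii')

name = encode_string('spam/eggs.h', 'Z')
assert name[0] == 'Z'                        # prefix makes it a valid NAME token
assert decode_string(name) == 'spam/eggs.h'  # lossless round-trip
```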
#############################
# Section: XML Stream writer.
#############################
class XmlStream(object):
"""Creates and maintains an XML output stream."""
INDENT_STR = u' '
    ENTITY_MAP = {
        ord('<') : u'&lt;',
        ord('>') : u'&gt;',
        ord('&') : u'&amp;',
        ord("'") : u'&apos;',
        ord('"') : u'&quot;',
    }
def __init__(self, theFout, theEnc='utf-8', theDtdLocal=None, theId=0, mustIndent=True):
"""Initialise with a writable file like object or a file path.
:param theFout: The file-like object or a path as a string. If the latter it
will be closed on __exit__.
:type theFout: ``_io.TextIOWrapper, str``
:param theEnc: The encoding to be used.
:type theEnc: ``str``
:param theDtdLocal: Any local DTD as a string.
:type theDtdLocal: ``NoneType``, ``str``
:param theId: An integer value to use as an ID string.
:type theId: ``int``
:param mustIndent: Flag, if True the elements will be indented (pretty printed).
:type mustIndent: ``bool``
:returns: ``NoneType``
"""
if isinstance(theFout, str):
self._file = open(theFout, 'w')
self._fileClose = True
else:
self._file = theFout
self._fileClose = False
self._enc = theEnc
self._dtdLocal = theDtdLocal
# Stack of strings
self._elemStk = []
self._inElem = False
self._canIndentStk = []
# An integer that represents a unique ID
self._intId = theId
self._mustIndent = mustIndent
@property
def id(self):
"""A unique ID in this stream. The ID is incremented on each call.
:returns: ``str`` -- The ID."""
self._intId += 1
return '%d' % (self._intId-1)
@property
def _canIndent(self):
"""Returns True if indentation is possible (no mixed content etc.).
:returns: ``bool`` -- True if the element can be indented."""
for b in self._canIndentStk:
if not b:
return False
return True
def _flipIndent(self, theBool):
"""Set the value at the tip of the indent stack to the given value.
:param theBool: Flag for indenting.
:type theBool: ``bool``
:returns: ``NoneType``
"""
assert(len(self._canIndentStk) > 0)
self._canIndentStk.pop()
self._canIndentStk.append(theBool)
def xmlSpacePreserve(self):
"""Suspends indentation for this element and its descendants.
:returns: ``NoneType``"""
if len(self._canIndentStk) == 0:
errMsg = 'xmlSpacePreserve() on empty stack.'
if RAISE_ON_ERROR:
raise ExceptionXml(errMsg)
logging.error(errMsg)
self._flipIndent(False)
def startElement(self, name, attrs):
"""Opens a named element with attributes.
:param name: Element name.
:type name: ``str``
:param attrs: Element attributes.
:type attrs: ``dict({str : [str]}), dict({})``
:returns: ``NoneType``"""
self._closeElemIfOpen()
self._indent()
self._file.write(u'<%s' % name)
kS = sorted(attrs.keys())
for k in kS:
self._file.write(u' %s="%s"' % (k, self._encode(attrs[k])))
self._inElem = True
self._canIndentStk.append(self._mustIndent)
self._elemStk.append(name)
def characters(self, theString):
"""Encodes the string and writes it to the output.
:param theString: The content.
:type theString: ``str``
:returns: ``NoneType``
"""
self._closeElemIfOpen()
encStr = self._encode(theString)
self._file.write(encStr)
# mixed content - don't indent
self._flipIndent(False)
def literal(self, theString):
"""Writes theString to the output without encoding.
:param theString: The content.
:type theString: ``str``
:returns: ``NoneType``
"""
self._closeElemIfOpen()
self._file.write(theString)
# mixed content - don't indent
self._flipIndent(False)
def comment(self, theS, newLine=False):
"""Writes a comment to the output stream.
:param theS: The comment.
:type theS: ``str``
:param newLine: If True the comment is written on a new line, if False it is written inline.
:type newLine: ``bool``
:returns: ``NoneType``
"""
self._closeElemIfOpen()
if newLine:
self._indent()
self._file.write('<!--%s-->' % self._encode(theS))
# mixed content - don't indent
#self._flipIndent(False)
def pI(self, theS):
"""Writes a Processing Instruction to the output stream."""
self._closeElemIfOpen()
self._file.write('<?%s?>' % self._encode(theS))
self._flipIndent(False)
def endElement(self, name):
"""Ends an element.
:param name: Element name.
:type name: ``str``
:returns: ``NoneType``
"""
if len(self._elemStk) == 0:
errMsg = 'endElement() on empty stack'
if RAISE_ON_ERROR:
raise ExceptionXmlEndElement(errMsg)
logging.error(errMsg)
if name != self._elemStk[-1]:
errMsg = 'endElement("%s") does not match "%s"' \
% (name, self._elemStk[-1])
if RAISE_ON_ERROR:
raise ExceptionXmlEndElement(errMsg)
logging.error(errMsg)
myName = self._elemStk.pop()
if self._inElem:
self._file.write(u' />')
self._inElem = False
else:
self._indent()
self._file.write(u'</%s>' % myName)
self._canIndentStk.pop()
def writeECMAScript(self, theScript):
"""Writes the ECMA script.
Example:
.. code-block:: html
<script type="text/ecmascript">
//<![CDATA[
...
// ]]>
</script>
        :param theScript: The ECMA script content.
        :type theScript: ``str``
:returns: ``NoneType``
"""
self.startElement('script', {'type' : "text/ecmascript"})
self.writeCDATA(theScript)
self.endElement('script')
def writeCDATA(self, theData):
"""Writes a CDATA section.
Example:
.. code-block:: html
<![CDATA[
...
]]>
:param theData: The CDATA content.
:type theData: ``str``
:returns: ``NoneType``
"""
self._closeElemIfOpen()
self.xmlSpacePreserve()
self._file.write(u'')
self._file.write(u'\n<![CDATA[\n')
self._file.write(theData)
self._file.write(u'\n]]>\n')
def writeCSS(self, theCSSMap):
"""Writes a style sheet as a CDATA section. Expects a dict of dicts.
Example:
.. code-block:: html
<style type="text/css"><![CDATA[
...
]]></style>
:param theCSSMap: Map of CSS elements.
:type theCSSMap: ``dict({str : [dict({str : [str]}), dict({str : [str]})]})``
:returns: ``NoneType``
"""
self.startElement('style', {'type' : "text/css"})
theLines = []
for style in sorted(theCSSMap.keys()):
theLines.append('%s {' % style)
for attr in sorted(theCSSMap[style].keys()):
theLines.append('%s : %s;' % (attr, theCSSMap[style][attr]))
theLines.append('}')
self.writeCDATA(u'\n'.join(theLines))
self.endElement('style')
def _indent(self, offset=0):
"""Write out the indent string.
:param offset: The offset.
:type offset: ``int``
:returns: ``NoneType``
"""
if self._canIndent:
self._file.write(u'\n')
self._file.write(self.INDENT_STR*(len(self._elemStk)-offset))
def _closeElemIfOpen(self):
"""Close the element if open.
:returns: ``NoneType``
"""
if self._inElem:
self._file.write(u'>')
self._inElem = False
def _encode(self, theStr):
""""Apply the XML encoding such as ``'<'`` to ``'<'``
:param theStr: String to encode.
:type theStr: ``str``
:returns: ``str`` -- Encoded string.
"""
if sys.version_info.major == 2:
# Python 2 clunkiness
result = []
for c in theStr:
try:
result.append(self.ENTITY_MAP[ord(c)])
except KeyError:
result.append(c)
return u''.join(result)
else:
assert sys.version_info.major == 3
return theStr.translate(self.ENTITY_MAP)
def __enter__(self):
"""Context manager support.
:returns: ``cpip.plot.SVGWriter.SVGWriter,cpip.util.XmlWrite.XhtmlStream`` -- self"""
self._file.write(u"<?xml version='1.0' encoding=\"%s\"?>" % self._enc)
# Write local DTD?
return self
def __exit__(self, exc_type, exc_value, traceback):
"""Context manager support.
:param excType: Exception type, if raised.
:type excType: ``NoneType``
:param excValue: Exception, if raised.
:type excValue: ``NoneType``
:param tb: Traceback, if raised.
:type tb: ``NoneType``
:returns: ``NoneType``
"""
while len(self._elemStk):
self.endElement(self._elemStk[-1])
self._file.write(u'\n')
if self._fileClose:
self._file.close()
return False
#############################
# End: XML Stream writer.
#############################
###############################
# Section: XHTML Stream writer.
###############################
class XhtmlStream(XmlStream):
"""Specialisation of an XmlStream to handle XHTML."""
def __enter__(self):
"""Context manager support.
:returns: ``cpip.util.XmlWrite.XhtmlStream`` -- self
"""
super(XhtmlStream, self).__enter__()
self._file.write(u"""\n<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">""")
self.startElement(
'html',
{
'xmlns' : 'http://www.w3.org/1999/xhtml',
'xml:lang' : 'en',
'lang' : 'en',
}
)
return self
def charactersWithBr(self, sIn):
"""Writes the string replacing any ``\\n`` characters with ``<br/>`` elements.
:param sIn: The string to write.
:type sIn: ``str``
:returns: ``NoneType``
"""
while len(sIn) > 0:
i = sIn.find('\n')
if i != -1:
self.characters(sIn[:i])
with Element(self, 'br'):
pass
sIn = sIn[i+1:]
else:
self.characters(sIn)
break
###############################
# Section: XHTML Stream writer.
###############################
##################################
# Section: Element for any writer.
##################################
class Element(object):
"""Represents an element in a markup stream."""
def __init__(self, theXmlStream, theElemName, theAttrs=None):
"""Constructor.
:param theXmlStream: The XML stream.
:type theXmlStream: ``cpip.plot.SVGWriter.SVGWriter, cpip.util.XmlWrite.XhtmlStream``
:param theElemName: Element name.
:type theElemName: ``str``
:param theAttrs: Element attributes
:type theAttrs: ``NoneType, dict({str : [str]}), dict({})``
:returns: ``NoneType``
"""
self._stream = theXmlStream
self._name = theElemName
self._attrs = theAttrs or {}
def __enter__(self):
"""Context manager support.
:returns: ``cpip.plot.SVGWriter.SVGGroup,cpip.plot.SVGWriter.SVGLine,cpip.plot.SVGWriter.SVGRect,cpip.plot.SVGWriter.SVGText,cpip.util.XmlWrite.Element`` -- self
"""
# Write element and attributes to the stream
self._stream.startElement(self._name, self._attrs)
return self
def __exit__(self, excType, excValue, tb):
"""Context manager support.
TODO: Should respect RAISE_ON_ERROR here if excType is not None.
:param excType: Exception type, if raised.
:type excType: ``NoneType``
:param excValue: Exception, if raised.
:type excValue: ``NoneType``
:param tb: Traceback, if raised.
:type tb: ``NoneType``
:returns: ``NoneType``
"""
# if excType is not None:
# print('excType= ', excType)
# print('excValue= ', excValue)
# print('traceback=\n', '\n'.join(traceback.format_tb(tb)))
# Close element on the stream
self._stream.endElement(self._name)
#return True
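The `_encode` method above relies on `str.translate` accepting a mapping from code points to replacement strings (Python 3 behaviour). A self-contained sketch of the same escaping, using the five XML predefined entities:

```python
ENTITY_MAP = {
    ord('<'): u'&lt;',
    ord('>'): u'&gt;',
    ord('&'): u'&amp;',
    ord("'"): u'&apos;',
    ord('"'): u'&quot;',
}

def xml_escape(s):
    # str.translate maps each listed code point to its entity reference
    # and leaves every other character untouched
    return s.translate(ENTITY_MAP)

assert xml_escape('a < b & c') == 'a &lt; b &amp; c'
assert xml_escape('say "hi"') == 'say &quot;hi&quot;'
```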
|
I spent this afternoon recording a show on infertility for The Morning Show on TV3, to be broadcast at 11am next Monday 15th June. Also on the show are fertility expert, Zita West, and Sarah and Bob Keating, who have been TTC for two years.
Update: The show is here. I am on from 20 minutes, although Sarah and Bob and Zita West, who are on before me, are well worth a look.
I have also written an article on infertility for the Irish Independent magazine, which I think is due to run tomorrow (Saturday 13th June). I will update with a link if it is in tomorrow.
Update: This will be in next Saturday’s (27th June) Irish Independent magazine.
I was on Ireland AM on TV3 yesterday. It was a last-minute thing so I didn’t get a chance to tell anyone. The topic of discussion was older women and pregnancy, given that a 66-year-old woman is set to become Britain’s oldest mother. But mostly we talked about infertility and I was pleased to find a sympathetic, understanding approach in presenter, Sinead Desmond.
Baby is fine. Good scan today. I won’t use the word “relax” just yet, as it is impossible to do so with two kids, work, morning sickness and total exhaustion but those same things mean that life is good and the outlook is optimistic.
I was on the Seoige show on RTE1 last Wednesday (Wed 19 Nov – about 28 mins in).
Last Monday the Irish Independent reported that a Galway fertility clinic was refusing to treat unmarried couples. David Quinn of the Iona Institute was on to argue the case that married couples make better parents than unmarried ones and therefore should be the only recipients of fertility treatment and I was there to speak for normal people.
Infertility is a medical condition. It is not up to doctors to choose which patients they treat on the basis of their own religious beliefs or morals. If you think children’s lives are at risk from their parents’ marital status then outlaw it completely. Don’t pick on those with physical disabilities and make examples of them. Just cos you can.
Incidentally, these are the same doctors that won’t prescribe the morning after pill for women who don’t want a child, yet refuse treatment to those that are desperate for one. Guys, a little consistency is needed here if you want to be taken seriously.
Why wait for babies when you can fast-track with IVF?
According to today’s Guardian, couples with no fertility problems may be opting for IVF to cut out the time and hassle of babymaking. One of these couples may be Angelina Jolie and Brad Pitt who, apparently, didn’t have the time or patience to try the usual way. It is possible that this trend may be encouraged by fertility clinics that aim to improve their success rates by treating fertile young couples instead of infertile old crocs. This may be happening at a clinic near you, “although no one can put a figure on this phenomenon”.
What I would like to know is, where are these magical clinics where you can get an instant appointment and then get pregnant, almost guaranteed it seems, on your first go?
Gerry Ryan show went well today. You can listen here.
I will be on Ireland AM on TV3 tomorrow morning at 9am. I’ve been told not to wear black or white. Hmmm, nearly all my maternity clothes are black. Think I will have to race out and get something quickly. See you later!
I was going to give you a report on last night’s shenanigans but I much prefer Darragh’s version of events! Lovely to meet you Darragh and thanks for the kind words.
Thanks to everyone who came along and to those of you I met for the first time, it was great to put faces to names. I did feel a bit weird signing books, sorta like people were humouring me cos it was my party!
I’ve been doing a lot of media for the book this week. I didn’t post about it as I thought it was a bit “oooh, look at me” but hubby thinks people might be interested. So, there was a piece in yesterday’s Irish Times (subscription needed), I’m in this month’s PC Live! magazine, next month’s Image magazine parenting supplement, Woman’s Way magazine in a couple of weeks and the next RTE Guide. I’ve been doing the rounds of the regional radio stations last week and this and will be on Limerick Live 95FM (prerecorded) at 10am tomorrow morning, East Coast FM on Friday at 10am and Highland Radio on Monday at 12pm.
And last but not least, I will be on the Gerry Ryan show on 2FM tomorrow between 10 and 11am. I will be taking questions from callers so please call in with some easy ones!
Just to remind you all, my book is being launched TONIGHT!!!!! I got a copy of it yesterday and am pretty happy with it. I thought I might be afraid to open it in case all I’d see would be things I wanted to change but it went through a fairly rigorous proofreading process and we seem to have caught most things. So I actually enjoyed reading my own book! All the feedback I’ve had so far has been good and I’ve also had my first review – so far, so good.
So, maybe see you later!?!
|
# -*- coding: utf-8 -*-
#------------------------------------------------------------
# streamondemand - XBMC Plugin
# Conector para rocvideo
# http://www.mimediacenter.info/foro/viewforum.php?f=36
#------------------------------------------------------------
import re
from core import jsunpack
from core import logger
from core import scrapertools
def test_video_exists( page_url ):
logger.info("streamondemand.servers.rocvideo test_video_exists(page_url='%s')" % page_url)
return True,""
def get_video_url( page_url , premium = False , user="" , password="", video_password="" ):
logger.info("streamondemand.servers.rocvideo url="+page_url)
if not "embed" in page_url:
page_url = page_url.replace("http://rocvideo.tv/","http://rocvideo.tv/embed-") + ".html"
data = scrapertools.cache_page( page_url )
data = scrapertools.find_single_match(data,"<script type='text/javascript'>(eval\(function\(p,a,c,k,e,d.*?)</script>")
data = jsunpack.unpack(data)
logger.info("data="+data)
#file:"http://s1.rocvideo.tv/files/2/aqsk8q5mjcoh1d/INT3NS4HDTS-L4T.mkv.mp4
media_url = scrapertools.get_match(data,'file:"([^"]+)"')
video_urls = []
video_urls.append( [ scrapertools.get_filename_from_url(media_url)[-4:]+" [rocvideo]",media_url])
return video_urls
# Finds the server's videos in the text passed in
def find_videos(data):
    # Manually add some known-bad IDs here to skip them
encontrados = set()
devuelve = []
#http://rocvideo.net/mfhpecruzj2q
#http://rocvideo.tv/mfhpecruzj2q
    patronvideos = 'rocvideo\.(?:tv|net)/embed-([a-z0-9A-Z]+)'
logger.info("streamondemand.servers.rocvideo find_videos #"+patronvideos+"#")
matches = re.compile(patronvideos,re.DOTALL).findall(data)
for match in matches:
titulo = "[rocvideo]"
url = "http://rocvideo.tv/embed-"+match+".html"
if url not in encontrados:
logger.info(" url="+url)
devuelve.append( [ titulo , url , 'rocvideo' ] )
encontrados.add(url)
else:
logger.info(" url duplicada="+url)
patronvideos = 'rocvideo.(?:tv|net)/([a-z0-9A-Z]+)'
logger.info("streamondemand.servers.rocvideo find_videos #"+patronvideos+"#")
matches = re.compile(patronvideos,re.DOTALL).findall(data)
for match in matches:
titulo = "[rocvideo]"
url = "http://rocvideo.tv/embed-"+match+".html"
if url not in encontrados:
logger.info(" url="+url)
devuelve.append( [ titulo , url , 'rocvideo' ] )
encontrados.add(url)
else:
logger.info(" url duplicada="+url)
return devuelve
|
Ensure your project meets all of our Eligibility criteria.
Complete our Quick Eligibility Test.
Send the completed Application Form to the VINCI UK Foundation.
Please note that applications must be supported by a VINCI employee, and are more likely to succeed when received at the very beginning of the application period.
|
from random import *
from time import sleep
################### MODEL #############################
def collide_boxes(box1, box2):
x1, y1, w1, h1 = box1
x2, y2, w2, h2 = box2
return x1 < x2 + w2 and y1 < y2 + h2 and x2 < x1 + w1 and y2 < y1 + h1
class Model():
cmd_directions = {'up': (0, -1),
'down': (0, 1),
'left': (-1, 0),
'right': (1, 0)}
def __init__(self):
self.borders = [[0, 0, 2, 300],
[0, 0, 400, 2],
[398, 0, 2, 300],
[0, 298, 400, 2]]
self.pellets = [ [randint(10, 380), randint(10, 280), 5, 5]
for _ in range(4) ]
self.game_over = False
self.mydir = self.cmd_directions['down'] # start direction: down
self.mybox = [200, 150, 10, 10] # start in middle of the screen
def do_cmd(self, cmd):
if cmd == 'quit':
self.game_over = True
else:
self.mydir = self.cmd_directions[cmd]
def update(self):
# move me
self.mybox[0] += self.mydir[0]
self.mybox[1] += self.mydir[1]
# potential collision with a border
for b in self.borders:
if collide_boxes(self.mybox, b):
self.mybox = [200, 150, 10, 10]
# potential collision with a pellet
for index, pellet in enumerate(self.pellets):
if collide_boxes(self.mybox, pellet):
self.mybox[2] *= 1.2
self.mybox[3] *= 1.2
self.pellets[index] = [randint(10, 380), randint(10, 280), 5, 5]
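The strict inequalities in collide_boxes mean that boxes which merely touch along an edge do not count as colliding. A quick standalone sanity check (the function is restated here so the snippet runs on its own):

```python
# Standalone sanity check for the axis-aligned bounding-box (AABB)
# overlap test used by the model above.
def collide_boxes(box1, box2):
    x1, y1, w1, h1 = box1
    x2, y2, w2, h2 = box2
    return x1 < x2 + w2 and y1 < y2 + h2 and x2 < x1 + w1 and y2 < y1 + h1

# Overlapping boxes collide...
assert collide_boxes([0, 0, 10, 10], [5, 5, 10, 10])
# ...boxes that only touch edges do not (strict inequalities)...
assert not collide_boxes([0, 0, 10, 10], [10, 0, 10, 10])
# ...and full containment counts as a collision.
assert collide_boxes([0, 0, 100, 100], [40, 40, 5, 5])
```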
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright 2015 Timu Eren <timu.eren@gmail.com>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from creative import Creative
REQUIRED_INLINE = ['AdSystem', 'AdTitle']
REQUIRED_WRAPPER = ['AdSystem', 'VASTAdTagURI']
def validateSettings(settings, requireds):
keys = settings.keys()
for required in requireds:
if required not in keys:
raise Exception("Missing required settings: {required}".format(required=required))
def validateInLineSettings(settings):
validateSettings(settings, REQUIRED_INLINE)
def validateWrapperSettings(settings):
validateSettings(settings, REQUIRED_WRAPPER)
class Ad(object):
def __init__(self, settings={}):
self.errors = []
self.surveys = []
self.impressions = []
self.creatives = []
if settings["structure"].lower() == 'wrapper':
validateWrapperSettings(settings)
self.VASTAdTagURI = settings["VASTAdTagURI"]
else:
validateInLineSettings(settings)
self.id = settings["id"]
self.sequence = settings.get("sequence", None)
self.structure = settings["structure"]
self.AdSystem = settings["AdSystem"]
self.AdTitle = settings["AdTitle"]
# optional elements
self.Error = settings.get("Error", None)
self.Description = settings.get("Description", None)
self.Advertiser = settings.get("Advertiser", None)
self.Pricing = settings.get("Pricing", None)
self.Extensions = settings.get("Extensions", None)
def attachSurvey(self, settings):
        survey = {"url": settings["url"]}
if "type" in settings:
survey["type"] = settings["type"]
self.surveys.append(survey)
def attachImpression(self, settings):
self.impressions.append(settings)
return self
def attachCreative(self, _type, options):
creative = Creative(_type, options)
self.creatives.append(creative)
return creative
|
It's been a great summer so far at the historic Atlas Coal Mine in East Coulee, but staff are already looking ahead to the fall and winter.
The fourth and final stage of a restoration of the wooden tipple is scheduled to get underway.
"The tipple has been our big thing for a long time: it is Canada's last wooden tipple (and) we are the most complete historic coal mine in Canada, so we do have our priorities," explained Jay Russell, Curator at the Atlas Mine. "Once we have the tipple done, we have some other things to do and that's what we want to work on, is prioritizing what's the next big thing."
"When you have a seven and a half story structure made of wood, designed in the 1930s with specialty pieces, it does take a bit of work to restore it," he added. "Of course, the other thing is striking a balance, restoring it versus keeping its original integrity, so we only are replacing what we absolutely have to."
Work on restoring the 80-year-old wooden tipple began in 2014, and this final phase will put a new roof on to protect what's already been done.
"We want to put some new interpretation inside the tipple, so there will be some new stuff going on in there," outlined Russell. "Hopefully, we'll have some new, exciting, engaging things, as early as this fall, going into the tipple. The stuff we're working on this year will be official and unveiled for next year's tours."
|
#!/usr/bin/env python2.3
"""
Read a table dump in the UCSC gene table format and print a tab separated
list of intervals corresponding to requested features of each gene.
usage: ucsc_gene_table_to_intervals.py [options]
options:
-h, --help show this help message and exit
-rREGION, --region=REGION
Limit to region: one of coding, utr3, utr5, transcribed [default]
-e, --exons Only print intervals overlapping an exon
-i, --input=inputfile input file
-o, --output=outputfile output file
"""
import optparse, string, sys
def main():
# Parse command line
parser = optparse.OptionParser( usage="%prog [options] " )
parser.add_option( "-r", "--region", dest="region", default="transcribed",
help="Limit to region: one of coding, utr3, utr5, transcribed [default]" )
parser.add_option( "-e", "--exons", action="store_true", dest="exons",
help="Only print intervals overlapping an exon" )
parser.add_option( "-s", "--strand", action="store_true", dest="strand",
help="Print strand after interval" )
parser.add_option( "-i", "--input", dest="input", default=None,
help="Input file" )
parser.add_option( "-o", "--output", dest="output", default=None,
help="Output file" )
options, args = parser.parse_args()
assert options.region in ( 'coding', 'utr3', 'utr5', 'transcribed' ), "Invalid region argument"
    try:
        out_file = open( options.output, "w" )
    except:
        print >> sys.stderr, "Bad output file."
        sys.exit( 1 )
    try:
        in_file = open( options.input )
    except:
        print >> sys.stderr, "Bad input file."
        sys.exit( 1 )
print "Region:", options.region+";"
print "Only overlap with Exons:",
if options.exons:
print "Yes"
else:
print "No"
# Read table and handle each gene
for line in in_file:
try:
if line[0:1] == "#":
continue
            # Parse fields from gene table
fields = line.split( '\t' )
chrom = fields[0]
tx_start = int( fields[1] )
tx_end = int( fields[2] )
name = fields[3]
strand = fields[5].replace(" ","_")
cds_start = int( fields[6] )
cds_end = int( fields[7] )
# Determine the subset of the transcribed region we are interested in
if options.region == 'utr3':
if strand == '-': region_start, region_end = tx_start, cds_start
else: region_start, region_end = cds_end, tx_end
elif options.region == 'utr5':
if strand == '-': region_start, region_end = cds_end, tx_end
else: region_start, region_end = tx_start, cds_start
elif options.region == 'coding':
region_start, region_end = cds_start, cds_end
else:
region_start, region_end = tx_start, tx_end
# If only interested in exons, print the portion of each exon overlapping
# the region of interest, otherwise print the span of the region
if options.exons:
exon_starts = map( int, fields[11].rstrip( ',\n' ).split( ',' ) )
exon_starts = map((lambda x: x + tx_start ), exon_starts)
exon_ends = map( int, fields[10].rstrip( ',\n' ).split( ',' ) )
                exon_ends = map( (lambda x, y: x + y), exon_starts, exon_ends )
for start, end in zip( exon_starts, exon_ends ):
start = max( start, region_start )
end = min( end, region_end )
if start < end:
if strand: print_tab_sep(out_file, chrom, start, end, name, "0", strand )
else: print_tab_sep(out_file, chrom, start, end )
else:
if strand: print_tab_sep(out_file, chrom, region_start, region_end, name, "0", strand )
else: print_tab_sep(out_file, chrom, region_start, region_end )
except:
continue
def print_tab_sep(out_file, *args ):
"""Print items in `l` to stdout separated by tabs"""
print >>out_file, string.join( [ str( f ) for f in args ], '\t' )
if __name__ == "__main__": main()
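The region arithmetic above is easy to get backwards for minus-strand genes. A standalone restatement of the same selection logic, with hypothetical coordinates (not taken from any real gene table):

```python
# Standalone restatement of the region-selection arithmetic in the
# script above. For a minus-strand gene the 3' UTR lies at the
# low-coordinate end of the transcript, so the two UTR cases swap.
def pick_region(region, strand, tx_start, tx_end, cds_start, cds_end):
    if region == 'utr3':
        return (tx_start, cds_start) if strand == '-' else (cds_end, tx_end)
    elif region == 'utr5':
        return (cds_end, tx_end) if strand == '-' else (tx_start, cds_start)
    elif region == 'coding':
        return (cds_start, cds_end)
    return (tx_start, tx_end)

# Plus strand: 5' UTR precedes the CDS, 3' UTR follows it.
assert pick_region('utr5', '+', 100, 900, 200, 800) == (100, 200)
assert pick_region('utr3', '+', 100, 900, 200, 800) == (800, 900)
# Minus strand: the roles of the two transcript ends swap.
assert pick_region('utr3', '-', 100, 900, 200, 800) == (100, 200)
```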
|
This 3 bedroom, 1.5 bath colonial is ready for you to move right in! The first floor features hardwood floors throughout the living room and dining room. The huge kitchen can be used as an eat-in kitchen with plenty of cabinet and counter space. On the first floor you'll also find a full bathroom and a laundry room. On the second floor you'll find three bedrooms including the master bedroom with half bath. Outside is the fenced yard and patio space, perfect for entertaining.
|
# -*- coding: utf-8 -*-
# Form implementation generated from reading ui file 'ui/infostatus.ui'
#
# Created: Thu Jun 27 19:18:14 2013
# by: PyQt4 UI code generator 4.9.3
#
# WARNING! All changes made in this file will be lost!
from PyQt4 import QtCore, QtGui
try:
_fromUtf8 = QtCore.QString.fromUtf8
except AttributeError:
_fromUtf8 = lambda s: s
class Ui_InfoStatus(object):
def setupUi(self, InfoStatus):
InfoStatus.setObjectName(_fromUtf8("InfoStatus"))
InfoStatus.resize(350, 24)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Preferred, QtGui.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(InfoStatus.sizePolicy().hasHeightForWidth())
InfoStatus.setSizePolicy(sizePolicy)
InfoStatus.setMinimumSize(QtCore.QSize(0, 0))
self.horizontalLayout = QtGui.QHBoxLayout(InfoStatus)
self.horizontalLayout.setSpacing(2)
self.horizontalLayout.setMargin(0)
self.horizontalLayout.setObjectName(_fromUtf8("horizontalLayout"))
self.val1 = QtGui.QLabel(InfoStatus)
self.val1.setMinimumSize(QtCore.QSize(40, 0))
self.val1.setText(_fromUtf8(""))
self.val1.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.val1.setObjectName(_fromUtf8("val1"))
self.horizontalLayout.addWidget(self.val1)
self.label1 = QtGui.QLabel(InfoStatus)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.label1.sizePolicy().hasHeightForWidth())
self.label1.setSizePolicy(sizePolicy)
self.label1.setFrameShape(QtGui.QFrame.NoFrame)
self.label1.setTextFormat(QtCore.Qt.AutoText)
self.label1.setScaledContents(False)
self.label1.setMargin(1)
self.label1.setObjectName(_fromUtf8("label1"))
self.horizontalLayout.addWidget(self.label1)
self.val2 = QtGui.QLabel(InfoStatus)
self.val2.setMinimumSize(QtCore.QSize(40, 0))
self.val2.setText(_fromUtf8(""))
self.val2.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.val2.setObjectName(_fromUtf8("val2"))
self.horizontalLayout.addWidget(self.val2)
self.label2 = QtGui.QLabel(InfoStatus)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.label2.sizePolicy().hasHeightForWidth())
self.label2.setSizePolicy(sizePolicy)
self.label2.setText(_fromUtf8(""))
self.label2.setObjectName(_fromUtf8("label2"))
self.horizontalLayout.addWidget(self.label2)
self.val3 = QtGui.QLabel(InfoStatus)
self.val3.setMinimumSize(QtCore.QSize(40, 0))
self.val3.setText(_fromUtf8(""))
self.val3.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.val3.setObjectName(_fromUtf8("val3"))
self.horizontalLayout.addWidget(self.val3)
self.label3 = QtGui.QLabel(InfoStatus)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.label3.sizePolicy().hasHeightForWidth())
self.label3.setSizePolicy(sizePolicy)
self.label3.setText(_fromUtf8(""))
self.label3.setObjectName(_fromUtf8("label3"))
self.horizontalLayout.addWidget(self.label3)
self.val4 = QtGui.QLabel(InfoStatus)
self.val4.setMinimumSize(QtCore.QSize(40, 0))
self.val4.setText(_fromUtf8(""))
self.val4.setAlignment(QtCore.Qt.AlignRight|QtCore.Qt.AlignTrailing|QtCore.Qt.AlignVCenter)
self.val4.setObjectName(_fromUtf8("val4"))
self.horizontalLayout.addWidget(self.val4)
self.label4 = QtGui.QLabel(InfoStatus)
sizePolicy = QtGui.QSizePolicy(QtGui.QSizePolicy.Minimum, QtGui.QSizePolicy.Preferred)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(self.label4.sizePolicy().hasHeightForWidth())
self.label4.setSizePolicy(sizePolicy)
self.label4.setText(_fromUtf8(""))
self.label4.setScaledContents(False)
self.label4.setObjectName(_fromUtf8("label4"))
self.horizontalLayout.addWidget(self.label4)
self.retranslateUi(InfoStatus)
QtCore.QMetaObject.connectSlotsByName(InfoStatus)
def retranslateUi(self, InfoStatus):
InfoStatus.setWindowTitle(_("Form"))
|
The Aerospace Corporation (Aerospace) has received a grant for $2.52 million from NASA’s Heliophysics Science Division to design, build, and operate two 1.5U CubeSats that will measure and study two features of the nighttime upper atmosphere over a one-year period. The two CubeSats are expected to launch late 2019/early 2020.
Rainer Weiss, Barry Barish, and Kip Thorne share Nobel Prize for research culminating in LIGO detection of gravitational waves.
The American Institute of Aeronautics and Astronautics (AIAA) is pleased to announce its Class of 2018 Associate Fellows. AIAA will formally honor and induct the class at its AIAA Associate Fellows Recognition Ceremony and Dinner on Monday, January 8, 2018, at the Gaylord Palms in Kissimmee, Florida, in conjunction with the 2018 AIAA Science and Technology Forum and Exposition (AIAA SciTech Forum), January 8–12.
Bill Possel, the director of LASP’s Mission Operations and Data Systems, will describe this unique student program and give updates on the latest results from Kepler. At LASP, October 4, 7:30 p.m.
Expedition 53 marks the first long-term increase in crew size from three to four on the U.S. segment of the International Space station, allowing NASA to maximize time dedicated to research in space. Experiments from a wide array of disciplines are already taking place aboard, including the latest round of the Veggie plant growth payload.
RS-25 flight engine E2063 is delivered and lifted into place onto the A-1 Test Stand at Stennis Space Center on Sept. 27 in preparation for an Oct. 19 hotfire test. Once tested and certified, the engine is scheduled to help power NASA’s new Space Launch System (SLS) on its Exploration Mission-2 (EM-2), which will be the first flight of the new rocket to carry humans.
Eugene N. Parker, professor emeritus at the University of Chicago, today visited the spacecraft that bears his name: NASA’s Parker Solar Probe. This is the first NASA mission that has been named for a living researcher, and is humanity’s first mission to the Sun.
The Colorado Center of Excellence (CoE) for Advanced Technology Aerial Firefighting has announced the launch of its online support application for testing unmanned aircraft systems (UAS) in public safety.
In the immediate aftermath of Hurricane Harvey, there was an overwhelming need for water rescues in Texas. Despite never formally conducting floodwater rescues as an organization, Team Rubicon was moved to action by the numerous requests for assistance. On August 28—just three days after Harvey made landfall—we deployed six boat-rescue teams of experienced volunteers to the Houston area.
According to the head of the Russian Academy of Sciences’ Space Research Institute, the launch of the ExoMars-2020 mission, which will send a European rover to the red planet, is scheduled for July 24, 2020.
Russia may adjust its federal space program to facilitate funding of the construction of a super-heavy-lift launch vehicle (SHLLV), General Director of Russia’s Rocket and Space Corporation Energia Vladimir Solntsev told Sputnik on Tuesday.
NASA announced that Old Dominion University researcher Ben Hamlington will lead the agency’s Sea Level Change Team, or SLCT, for the next three years. The new SLCT consists of eight members selected from 20 research proposals. Hamlington’s research proposal, “Identifying, Quantifying and Projecting Decadal Sea Level Change,” was chosen from five proposals to lead the team.
India’s Mars Orbiter Mission has now completed three years in orbit at Mars, and ISRO celebrated the anniversary by releasing the mission’s second-year data to the public.
Billionaires Are Leading The Space Race. What Will Trump’s NASA Do About Them?
Donald Trump’s first space council meeting happens on Thursday. It may pick a winner in the billionaire-backed race for the Moon and Mars.
The head of NASA’s human spaceflight program says he would like to see a decision made in the next two years on whether and how International Space Station operations will be extended beyond 2024. Bill Gerstenmaier, NASA associate administrator for human exploration and operations, and other representatives of ISS partner nations discussed that timeframe during a panel discussion at the 68th International Astronautical Congress here Sept. 27.
|
# This file is dual licensed under the terms of the Apache License, Version
# 2.0, and the BSD License. See the LICENSE file in the root of this repository
# for complete details.
from __future__ import absolute_import, division, print_function
import pytest
from cryptography.exceptions import _Reasons
from cryptography.hazmat.backends.interfaces import HMACBackend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.twofactor import InvalidToken
from cryptography.hazmat.primitives.twofactor.totp import TOTP
from ....utils import (
load_nist_vectors, load_vectors_from_file, raises_unsupported_algorithm
)
vectors = load_vectors_from_file(
"twofactor/rfc-6238.txt", load_nist_vectors)
@pytest.mark.requires_backend_interface(interface=HMACBackend)
class TestTOTP(object):
@pytest.mark.supported(
only_if=lambda backend: backend.hmac_supported(hashes.SHA1()),
skip_message="Does not support HMAC-SHA1."
)
@pytest.mark.parametrize(
"params", [i for i in vectors if i["mode"] == b"SHA1"])
def test_generate_sha1(self, backend, params):
secret = params["secret"]
time = int(params["time"])
totp_value = params["totp"]
totp = TOTP(secret, 8, hashes.SHA1(), 30, backend)
assert totp.generate(time) == totp_value
@pytest.mark.supported(
only_if=lambda backend: backend.hmac_supported(hashes.SHA256()),
skip_message="Does not support HMAC-SHA256."
)
@pytest.mark.parametrize(
"params", [i for i in vectors if i["mode"] == b"SHA256"])
def test_generate_sha256(self, backend, params):
secret = params["secret"]
time = int(params["time"])
totp_value = params["totp"]
totp = TOTP(secret, 8, hashes.SHA256(), 30, backend)
assert totp.generate(time) == totp_value
@pytest.mark.supported(
only_if=lambda backend: backend.hmac_supported(hashes.SHA512()),
skip_message="Does not support HMAC-SHA512."
)
@pytest.mark.parametrize(
"params", [i for i in vectors if i["mode"] == b"SHA512"])
def test_generate_sha512(self, backend, params):
secret = params["secret"]
time = int(params["time"])
totp_value = params["totp"]
totp = TOTP(secret, 8, hashes.SHA512(), 30, backend)
assert totp.generate(time) == totp_value
@pytest.mark.supported(
only_if=lambda backend: backend.hmac_supported(hashes.SHA1()),
skip_message="Does not support HMAC-SHA1."
)
@pytest.mark.parametrize(
"params", [i for i in vectors if i["mode"] == b"SHA1"])
def test_verify_sha1(self, backend, params):
secret = params["secret"]
time = int(params["time"])
totp_value = params["totp"]
totp = TOTP(secret, 8, hashes.SHA1(), 30, backend)
assert totp.verify(totp_value, time) is None
@pytest.mark.supported(
only_if=lambda backend: backend.hmac_supported(hashes.SHA256()),
skip_message="Does not support HMAC-SHA256."
)
@pytest.mark.parametrize(
"params", [i for i in vectors if i["mode"] == b"SHA256"])
def test_verify_sha256(self, backend, params):
secret = params["secret"]
time = int(params["time"])
totp_value = params["totp"]
totp = TOTP(secret, 8, hashes.SHA256(), 30, backend)
assert totp.verify(totp_value, time) is None
@pytest.mark.supported(
only_if=lambda backend: backend.hmac_supported(hashes.SHA512()),
skip_message="Does not support HMAC-SHA512."
)
@pytest.mark.parametrize(
"params", [i for i in vectors if i["mode"] == b"SHA512"])
def test_verify_sha512(self, backend, params):
secret = params["secret"]
time = int(params["time"])
totp_value = params["totp"]
totp = TOTP(secret, 8, hashes.SHA512(), 30, backend)
assert totp.verify(totp_value, time) is None
def test_invalid_verify(self, backend):
secret = b"12345678901234567890"
time = 59
totp = TOTP(secret, 8, hashes.SHA1(), 30, backend)
with pytest.raises(InvalidToken):
totp.verify(b"12345678", time)
def test_floating_point_time_generate(self, backend):
secret = b"12345678901234567890"
time = 59.1
totp = TOTP(secret, 8, hashes.SHA1(), 30, backend)
assert totp.generate(time) == b"94287082"
def test_get_provisioning_uri(self, backend):
secret = b"12345678901234567890"
totp = TOTP(secret, 6, hashes.SHA1(), 30, backend=backend)
assert totp.get_provisioning_uri("Alice Smith", None) == (
"otpauth://totp/Alice%20Smith?digits=6&secret=GEZDGNBVG"
"Y3TQOJQGEZDGNBVGY3TQOJQ&algorithm=SHA1&period=30")
assert totp.get_provisioning_uri("Alice Smith", 'World') == (
"otpauth://totp/World:Alice%20Smith?digits=6&secret=GEZ"
"DGNBVGY3TQOJQGEZDGNBVGY3TQOJQ&algorithm=SHA1&issuer=World"
"&period=30")
def test_invalid_backend():
secret = b"12345678901234567890"
pretend_backend = object()
with raises_unsupported_algorithm(_Reasons.BACKEND_MISSING_INTERFACE):
TOTP(secret, 8, hashes.SHA1(), 30, pretend_backend)
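The vectors exercised above come from RFC 6238. As a reference point, the same computation can be sketched with only the standard library (this is an illustrative restatement of the algorithm, not the cryptography package's implementation): HMAC over the big-endian counter floor(time / step), dynamic truncation, then reduction modulo 10^digits.

```python
# Stdlib-only sketch of the RFC 6238 TOTP computation that the tests
# above exercise through the cryptography package.
import hmac
import struct

def totp(secret, time, digits=8, algorithm="sha1", time_step=30):
    # The moving factor is the number of whole time steps elapsed.
    counter = struct.pack(">Q", int(time) // time_step)
    digest = hmac.new(secret, counter, algorithm).digest()
    # Dynamic truncation (RFC 4226): low nibble of last byte picks the offset.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return ("%0*d" % (digits, code % (10 ** digits))).encode()

# First RFC 6238 SHA-1 test vector (same value the floating-point test expects).
assert totp(b"12345678901234567890", 59) == b"94287082"
```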
|
The lyrical monopolist Maupheen is back to vibe the industry once more with this new song entitled 'Prime Time', featuring Northern Nigeria's popular bilingual lyrical puncher B.O.C. Madaki and Adamawa's super-talented Hip-hop rapper Karams.
Maupheen, who is known for winning the #WhoYouEppCompetition hosted by YBNL boss Olamide and NOTJUSTOK, the prestigious African music platform, is said to be working on an EP whose title we are yet to confirm.
The song 'Prime Time' comes after the national premiere of "NOTICE", produced by Lord Gabrielz some time back.
Take some time to download and listen to 'Prime Time'.
|
""" test the environment config """
import os
import unittest
from etl_framework.configs.job import JobConfig
class JobConfigTestCases(unittest.TestCase):
""" class for test cases """
JOB_CONFIG_FILEPATH = os.path.join(
os.path.dirname(__file__),
'fixtures/job.json'
)
def setUp(self):
self.job_config = JobConfig.create_from_filepath(
self.JOB_CONFIG_FILEPATH
)
def test_get_environment_configuration(self):
""" stuff """
# This is determined by the fixtures/job.json config
# and should be the value of "environment" key
expected_output = {
"config_dir": "fixtures",
"config_filename": "environment.json"
}
output = self.job_config.get_environment_configuration()
self.assertEqual(output, expected_output)
def test_get_environment_configuration_filepath(self):
""" stuff """
# This is determined by the fixtures/job.json config
expected_filepath = 'fixtures/environment.json'
filepath = self.job_config.get_environment_configuration_filepath()
self.assertEqual(filepath, expected_filepath)
|
Wild Life - Leschenault Catchment Council Inc.
A diverse array of wildlife species can be found in the Leschenault catchment. While some species such as the Western grey kangaroo are common, many others are now rarely seen or are restricted in range and have been afforded special protection status. Many of our local animals such as the Quokka (Setonix brachyurus) and the Western Ringtail possum (Pseudocheirus occidentalis) are considered unique to the south west area and are not found anywhere else in the world.
Migratory birds will travel from thousands of kilometres away to reside temporarily in our catchment.
From the majestic Wedge-tailed eagle (Aquila audax) and the cute Numbat (Myrmecobius fasciatus), to the highly venomous Tiger snake (Notechis scutatus) and noisy Motorbike frog (Litoria moorei), they all can be observed in the natural ecosystems within the Leschenault Catchment.
|
import copy
import numpy as np
m_prefs=np.array([[4,0,1,2,3],[1,2,0,3,4],[3,1,0,2,4]])
f_prefs=np.array([[0,1,2,3],[1,0,3,2],[1,2,0,3],[0,3,2,1]])
def array_to_dict(array):
    # Map each row index to a list copy of the row (avoid shadowing the built-in dict)
    result = {}
    for x, y in enumerate(array):
        result[x] = list(y)
    return result
def deferred_acceptance(m_prefs,f_prefs):
m_prefers = array_to_dict(m_prefs)
f_prefers = array_to_dict(f_prefs)
guys = sorted(m_prefers.keys())
gals = sorted(f_prefers.keys())
guysfree = guys[:]
engaged = {}
guyprefers2 = copy.deepcopy(m_prefers)
galprefers2 = copy.deepcopy(f_prefers)
while guysfree:
guy = guysfree.pop(0)
guyslist = guyprefers2[guy]
gal = guyslist.pop(0)
fiance = engaged.get(gal)
if not fiance:
# She's free
engaged[gal] = guy
else:
# The bounder proposes to an engaged lass!
galslist = galprefers2[gal]
if galslist.index(fiance) > galslist.index(guy):
# She prefers new guy
engaged[gal] = guy
if guyprefers2[fiance]:
# Ex has more girls to try
guysfree.append(fiance)
else:
# She is faithful to old fiance
if guyslist:
# Look again
guysfree.append(guy)
return engaged
print(deferred_acceptance(m_prefs, f_prefs))  # maps each woman to her matched man
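A matching produced by deferred acceptance should be stable: no man and woman both prefer each other to their assigned partners. A standalone checker for that property (not part of the original file; it uses the same `{woman: man}` dict shape the function above returns, with a tiny hand-checked instance):

```python
# Standalone stability check for a matching in the {woman: man} form
# produced by deferred_acceptance() above.
def is_stable(engaged, m_prefers, f_prefers):
    man_of = engaged
    woman_of = {w_: m_ for w_, m_ in engaged.items()}
    woman_of = {m_: w_ for w_, m_ in engaged.items()}
    for m, prefs in m_prefers.items():
        for w in prefs:
            if w == woman_of.get(m):
                break  # reached his own partner; no woman he prefers remains
            # m prefers w to his partner: blocking pair iff w prefers m too.
            fp = f_prefers[w]
            if w in man_of and fp.index(m) < fp.index(man_of[w]):
                return False
    return True

m_prefers = {0: [0, 1], 1: [1, 0]}
f_prefers = {0: [0, 1], 1: [1, 0]}
assert is_stable({0: 0, 1: 1}, m_prefers, f_prefers)       # everyone has their first choice
assert not is_stable({0: 1, 1: 0}, m_prefers, f_prefers)   # both pairs would rather swap
```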
|
Well we might have been duped the first time, when we thought we spotted the first Ed Hardy iPad cases. It turned out those where actually not the real deal but these are! Yes they have arrived and admittedly – I think I like the unofficial ones better. Ed Hardy has 4 iPad cases currently for sale and they retail for $35. The designs are the typical Ed Hardy standard. You have the fish, Love Kills Slowly skull, and Tiger tattoo designs. The perfect accessory if you are heading to Miami.
Nice-looking cases, but I agree the “knock offs”, or whatever they were, that were spotted earlier looked better.
|
# Simple client used to interact with concurrent servers.
#
# Launches N concurrent client connections, each executing a pre-set sequence of
# sends to the server, and logs what was received back.
#
# Tested with Python 3.6
#
# Eli Bendersky [http://eli.thegreenplace.net]
# This code is in the public domain.
import argparse
import logging
import socket
import sys
import threading
import time
class ReadThread(threading.Thread):
def __init__(self, name, sockobj):
super().__init__()
self.sockobj = sockobj
self.name = name
self.bufsize = 8 * 1024
def run(self):
fullbuf = b''
while True:
buf = self.sockobj.recv(self.bufsize)
logging.info('{0} received {1}'.format(self.name, buf))
fullbuf += buf
if b'1111' in fullbuf:
break
def make_new_connection(name, host, port):
"""Creates a single socket connection to the host:port.
Sets a pre-set sequence of messages to the server with pre-set delays; in
parallel, reads from the socket in a separate thread.
"""
sockobj = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sockobj.connect((host, port))
if sockobj.recv(1) != b'*':
logging.error('Something is wrong! Did not receive *')
logging.info('{0} connected...'.format(name))
rthread = ReadThread(name, sockobj)
rthread.start()
s = b'^abc$de^abte$f'
logging.info('{0} sending {1}'.format(name, s))
sockobj.send(s)
time.sleep(1.0)
s = b'xyz^123'
logging.info('{0} sending {1}'.format(name, s))
sockobj.send(s)
time.sleep(1.0)
# The 0000 sent to the server here will result in an echo of 1111, which is
# a sign for the reading thread to terminate.
# Add WXY after 0000 to enable kill-switch in some servers.
s = b'25$^ab0000$abab'
logging.info('{0} sending {1}'.format(name, s))
sockobj.send(s)
time.sleep(0.2)
rthread.join()
sockobj.close()
logging.info('{0} disconnecting'.format(name))
def main():
argparser = argparse.ArgumentParser('Simple TCP client')
argparser.add_argument('host', help='Server host name')
argparser.add_argument('port', type=int, help='Server port')
argparser.add_argument('-n', '--num_concurrent', type=int,
default=1,
help='Number of concurrent connections')
args = argparser.parse_args()
logging.basicConfig(
level=logging.DEBUG,
format='%(levelname)s:%(asctime)s:%(message)s')
t1 = time.time()
connections = []
for i in range(args.num_concurrent):
name = 'conn{0}'.format(i)
tconn = threading.Thread(target=make_new_connection,
args=(name, args.host, args.port))
tconn.start()
connections.append(tconn)
for conn in connections:
conn.join()
print('Elapsed:', time.time() - t1)
if __name__ == '__main__':
main()
|
The two Real Suites at the Villa Real Hotel 5* are arranged on two convenient levels and each feature their own 15m² private terrace. One has views over Plaza de las Cortes and the centre of Madrid, while the other overlooks the quiet inner courtyard.
The decor of the Real Suite stands out for its elegance, design and character: a rich combination of fine materials, furniture and pieces of priceless art, such as a sculpture of the god Ganesha in meditation from the nineteenth century, geometric mosaic tiles from the thirteenth and fourteenth centuries, an Indian rug and a leather Chesterfield sofa. The rooms combine a classic style and a blend of cultures: a leather-topped desk, chaise longue and Indian carved stone window from the late eighteenth century alongside prints with Egyptian motifs.
Not surprisingly, a stay in one of the Real Suites at the Villa Real Hotel 5* Madrid becomes a unique experience for the senses.
All of the rooms and elegant suites include individual air conditioning, 24-hour room service, 22 satellite TV channels, DVD player on request, direct telephone, minibar, a safe large enough for a laptop, free cot on request, laundry and ironing service, hair dryer, bathroom amenities and bathrobe.
|
# -*- coding: utf-8 -*-
##############################################################################
#
# nxsugarpy, a Python library for building nexus services with python
# Copyright (C) 2016 by the nxsugarpy team
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as published
# by the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Lesser General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import time
wanIps = "1.2.3.4"
lanIps = ["10.0.0.1", "172.16.0.1"]
user = "root"
directory = "/my/dir"
started = time.time()
|
Late 90's Dodge extended cargo van. Shag carpet all around, I mean the floor, sides, and ceiling. Even on the doors. Mattress on the floor at the back. 2 gas cans sitting next to the mattress.
Customer requested we install wiring, an inverter, and a receptacle...
|
"""
Tests for course utils.
"""
import ddt
import mock
from django.conf import settings
from openedx.core.djangoapps.content.course_overviews.models import CourseOverview
from util.course import get_link_for_about_page
from xmodule.modulestore import ModuleStoreEnum
from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase
from xmodule.modulestore.tests.factories import CourseFactory
@ddt.ddt
class TestCourseSharingLinks(ModuleStoreTestCase):
"""
Tests for course sharing links.
"""
def setUp(self):
super(TestCourseSharingLinks, self).setUp()
# create test mongo course
self.course = CourseFactory.create(
org='test_org',
number='test_number',
run='test_run',
default_store=ModuleStoreEnum.Type.split,
social_sharing_url='test_social_sharing_url',
)
        # load this course into course overview and set its marketing url
self.course_overview = CourseOverview.get_from_id(self.course.id)
self.course_overview.marketing_url = 'test_marketing_url'
self.course_overview.save()
def get_course_sharing_link(self, enable_social_sharing, enable_mktg_site, use_overview=True):
"""
Get course sharing link.
Arguments:
enable_social_sharing(Boolean): To indicate whether social sharing is enabled.
enable_mktg_site(Boolean): A feature flag to decide activation of marketing site.
Keyword Arguments:
            use_overview: indicates whether the course overview or the course
                descriptor should be passed to get_link_for_about_page.
Returns course sharing url.
"""
mock_settings = {
'FEATURES': {
'ENABLE_MKTG_SITE': enable_mktg_site
},
'SOCIAL_SHARING_SETTINGS': {
'CUSTOM_COURSE_URLS': enable_social_sharing
},
}
with mock.patch.multiple('django.conf.settings', **mock_settings):
course_sharing_link = get_link_for_about_page(
self.course_overview if use_overview else self.course
)
return course_sharing_link
@ddt.data(
(True, True, 'test_social_sharing_url'),
(False, True, 'test_marketing_url'),
(True, False, 'test_social_sharing_url'),
(False, False, '{}/courses/course-v1:test_org+test_number+test_run/about'.format(settings.LMS_ROOT_URL)),
)
@ddt.unpack
def test_sharing_link_with_settings(self, enable_social_sharing, enable_mktg_site, expected_course_sharing_link):
"""
Verify the method gives correct course sharing url on settings manipulations.
"""
actual_course_sharing_link = self.get_course_sharing_link(
enable_social_sharing=enable_social_sharing,
enable_mktg_site=enable_mktg_site,
)
self.assertEqual(actual_course_sharing_link, expected_course_sharing_link)
@ddt.data(
(['social_sharing_url'], 'test_marketing_url'),
(['marketing_url'], 'test_social_sharing_url'),
(
['social_sharing_url', 'marketing_url'],
'{}/courses/course-v1:test_org+test_number+test_run/about'.format(settings.LMS_ROOT_URL)
),
)
@ddt.unpack
def test_sharing_link_with_course_overview_attrs(self, overview_attrs, expected_course_sharing_link):
"""
Verify the method gives correct course sharing url when:
1. Neither marketing url nor social sharing url is set.
2. Either marketing url or social sharing url is set.
"""
for overview_attr in overview_attrs:
setattr(self.course_overview, overview_attr, None)
self.course_overview.save()
actual_course_sharing_link = self.get_course_sharing_link(
enable_social_sharing=True,
enable_mktg_site=True,
)
self.assertEqual(actual_course_sharing_link, expected_course_sharing_link)
@ddt.data(
(True, 'test_social_sharing_url'),
(
False,
'{}/courses/course-v1:test_org+test_number+test_run/about'.format(settings.LMS_ROOT_URL)
),
)
@ddt.unpack
def test_sharing_link_with_course_descriptor(self, enable_social_sharing, expected_course_sharing_link):
"""
Verify the method gives correct course sharing url on passing
course descriptor as a parameter.
"""
actual_course_sharing_link = self.get_course_sharing_link(
enable_social_sharing=enable_social_sharing,
enable_mktg_site=True,
use_overview=False,
)
self.assertEqual(actual_course_sharing_link, expected_course_sharing_link)
|
Dash Cams are the next step in driving security. A Dash Cam is a surveillance-oriented video recorder mounted on a vehicle's windscreen, facing out to the road. It records the events going on around your car onto a Micro SD Card and stores video evidence should an incident occur involving you. Dash Cams automatically power up and record when you turn your engine on, and automatically turn off and stop recording when you turn your engine off. Dash Cams are powered through an accessory socket (cigarette lighter socket) via a power cord. Premium Dash Cams can even continue recording while you are parked and away from your vehicle to catch vandals and hit-and-run offenders, and can monitor your garage/property. This parked recording feature is enabled by hard-wiring your Dash Cam to your car battery.
A Dash Cam provides irrefutable video evidence that can protect you against false accusations after an incident, road rage, drunk drivers and insurance fraud (just to mention a few). The video footage can also be used as evidence to assist removing hooligan drivers in your community off the roads, thus protecting everyone. After an accident occurs on our roads, we regularly hear Police Crash Investigators on radio or TV call on the public to send them any Dash Cam footage of the incident. Crash Investigators can use the footage to assist in piecing together what happened in the lead up to an accident.
Today, Dash Cams are used by the majority of drivers in America, Europe, South Korea, Russia, Southeast Asia, and is rapidly growing in popularity in South Africa. Car Dash Cams South Africa is the leader in the supply of high quality Dash Cams in South Africa.
Do I need a Dash Cam? And is it worth paying R1500+ for a Dash Cam?
Dash Cams will provide irrefutable video evidence in the event of an incident and protect you from false accusations. You may not be able to find a witness after an incident. The video evidence can clearly identify who was at fault and what exactly happened.
Cheap Dash Cams can have missing video files when you go to review the footage (or not even record at all when you needed it to).
Cheap Dash Cams might not function or record during our hot South African summers or during colder weather. Cheap Dash Cams are prone to freezing, restarting and malfunctioning in hot/cold conditions.
Cheaper Dash Cam manufacturers and sellers might not provide timely or honest warranty services. You could also be buying a fake, re-branded or refurbished Dash Cam.
Buying a higher-end, quality Dash Cam from a trusted supplier provides reliability, performance and the confidence that it will record under all driving conditions.
What are the Benefits of Installing a Dash Camera?
Dash Cams help safe drivers protect themselves from false accusations in road incidents. We understand that no matter how careful you drive, if other drivers are not paying attention, you can easily end up in an accident. You may be unfairly put at fault by the insurance company depending on the situation. With the Dash Cams we supply, safe drivers can record the entire incident scene and ensure the correct party takes the responsibility.
Dash Cam footage has been used successfully in traffic cases overseas to prove what actually occurred in an incident; for example, to show that the other driver ran a red light. In a case where it would be your word against the other driver’s, a video recording can play an important role in protecting you from false accusations (that is if you are not at fault). Police vehicles are commonly equipped with Dash Cams for their benefits and use. Dash Cams have prevented some South African drivers from paying fines, or assisted with their insurance claims, but we aren’t aware of any South African court cases where Dash Cam footage has been used in evidence.
Using a Dash Cam is legal, provided it is not manually operated while driving and is mounted securely. As with any sat nav or other dashboard/windshield mounted device, a Dash Cam should be in a fixed mount and must not obscure your view of the road ahead, behind or to either side of the vehicle. Recording video while driving on public roads is OK as privacy concerns don’t generally apply in public spaces, but think carefully about how you use the videos. There should be no problem showing Dash Cam videos to the police or insurers, but it might not always be OK to post them online.
There is huge potential for insurance companies to cooperate with the South African public and discount insurance premiums for Dash Cam use, as Dash Cams help to identify the at-fault party. If an incident occurs, having a Dash Cam can ensure that every detail, like the speed of your vehicle, the angle of the crash, and the surrounding driving conditions, will be available.
With this information, insurance companies can investigate the cause and fault of the incident more efficiently and promptly. This could lower the cost of premiums and reduce the time it takes to settle claims. Providing video evidence of an incident that was not your fault can help you avoid paying your excess and protect your no-claim history.
Most of our Dash Cams can protect our customer’s cars from being vandalized while parked. With the use of a hardwired kit to provide constant power, the Dash Cam can be triggered by motion or impact and record what is happening around your parked car. We all know what shopping centre/supermarket car parks are like. In the majority of cases, the negligent driver will leave the scene without leaving any details. The recorded footage can assist to prove to your insurance company that you did not cause the damage yourself, or in the case of deliberate damage the footage can be shown to Police.
|
import logging
from django.contrib.sites.models import Site
from django.http import (HttpResponseBadRequest, HttpResponseForbidden,
HttpResponseRedirect)
from django.shortcuts import get_object_or_404
from django.template.loader import render_to_string
from django.urls import reverse
from django.views.generic import DeleteView, UpdateView
from django.views.generic.edit import FormView
from rdmo.accounts.utils import is_site_manager
from rdmo.core.mail import send_mail
from rdmo.core.views import ObjectPermissionMixin, RedirectViewMixin
from ..forms import MembershipCreateForm
from ..models import Membership, Project
from ..utils import is_last_owner
logger = logging.getLogger(__name__)
class MembershipCreateView(ObjectPermissionMixin, RedirectViewMixin, FormView):
model = Membership
form_class = MembershipCreateForm
permission_required = 'projects.add_membership_object'
template_name = 'projects/membership_form.html'
def dispatch(self, *args, **kwargs):
self.project = get_object_or_404(Project.objects.all(), pk=self.kwargs['project_id'])
return super().dispatch(*args, **kwargs)
def get_queryset(self):
return Membership.objects.filter(project=self.project)
def get_permission_object(self):
return self.project
def get_form_kwargs(self):
kwargs = super().get_form_kwargs()
kwargs['project'] = self.project
kwargs['is_site_manager'] = is_site_manager(self.request.user)
return kwargs
def get_success_url(self):
return self.project.get_absolute_url()
def form_valid(self, form):
invite = form.save()
if invite is not None:
context = {
'invite_url': self.request.build_absolute_uri(reverse('project_join', args=[invite.token])),
'invite_user': invite.user,
'project': invite.project,
'user': self.request.user,
'site': Site.objects.get_current()
}
subject = render_to_string('projects/email/project_invite_subject.txt', context)
message = render_to_string('projects/email/project_invite_message.txt', context)
# send the email
send_mail(subject, message, to=[invite.email])
return super().form_valid(form)
class MembershipUpdateView(ObjectPermissionMixin, RedirectViewMixin, UpdateView):
fields = ('role', )
permission_required = 'projects.change_membership_object'
def get_queryset(self):
return Membership.objects.filter(project_id=self.kwargs.get('project_id'))
def get_permission_object(self):
return self.get_object().project
class MembershipDeleteView(ObjectPermissionMixin, RedirectViewMixin, DeleteView):
permission_required = 'projects.delete_membership_object'
def get_queryset(self):
return Membership.objects.filter(project_id=self.kwargs.get('project_id'))
def delete(self, *args, **kwargs):
self.obj = self.get_object()
if (self.request.user in self.obj.project.owners) or is_site_manager(self.request.user):
# user is owner or site manager
if is_last_owner(self.obj.project, self.obj.user):
logger.info('User "%s" not allowed to remove last user "%s"', self.request.user.username, self.obj.user.username)
return HttpResponseBadRequest()
else:
logger.info('User "%s" deletes user "%s"', self.request.user.username, self.obj.user.username)
success_url = reverse('project', args=[self.get_object().project.id])
self.obj.delete()
return HttpResponseRedirect(success_url)
elif self.request.user == self.obj.user:
# user wants to remove him/herself
logger.info('User "%s" deletes himself.', self.request.user.username)
success_url = reverse('projects')
self.obj.delete()
return HttpResponseRedirect(success_url)
else:
logger.info('User "%s" not allowed to remove user "%s"', self.request.user.username, self.obj.user.username)
return HttpResponseForbidden()
def get_permission_object(self):
return self.get_object().project
|
It is our company’s highest priority to provide our customers with Total Quality Service all the time.
Restaurant Hood Cleaning and Exhaust System Degreasing Specialists!
Our Hood Cleaning Complies with NFPA Code 96 – See Restaurant Info For Details!
Your First Choice in Siding Cleaning!
Our specially designed biodegradable chemicals are highly effective, and best of all, won’t harm the environment or vegetation.
Hot and Cold Water Systems from 1,000 P.S.I. to 10,000 P.S.I.
We can customize a service plan so you can have worry-free, scheduled cleanings!
|
# ============================================================================
#
# Copyright (C) 2007-2012 Conceptive Engineering bvba. All rights reserved.
# www.conceptive.be / project-camelot@conceptive.be
#
# This file is part of the Camelot Library.
#
# This file may be used under the terms of the GNU General Public
# License version 2.0 as published by the Free Software Foundation
# and appearing in the file license.txt included in the packaging of
# this file. Please review this information to ensure GNU
# General Public Licensing requirements will be met.
#
# If you are unsure which license is appropriate for your use, please
# visit www.python-camelot.com or contact project-camelot@conceptive.be
#
# This file is provided AS IS with NO WARRANTY OF ANY KIND, INCLUDING THE
# WARRANTY OF DESIGN, MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE.
#
# For use of this library in commercial applications, please contact
# project-camelot@conceptive.be
#
# ============================================================================
"""form view"""
import functools
import logging
LOGGER = logging.getLogger('camelot.view.controls.formview')
from PyQt4 import QtGui
from PyQt4 import QtCore
from PyQt4.QtCore import Qt
from camelot.admin.action.application_action import Refresh
from camelot.admin.action.form_action import FormActionGuiContext
from camelot.view.model_thread import post
from camelot.view.controls.view import AbstractView
from camelot.view.controls.busy_widget import BusyWidget
from camelot.view import register
class FormEditors( object ):
"""A class that holds the editors used on a form
"""
option = None
bold_font = None
def __init__( self, columns, widget_mapper, delegate, admin ):
        if self.option is None:
self.option = QtGui.QStyleOptionViewItem()
# set version to 5 to indicate the widget will appear on a
# a form view and not on a table view
self.option.version = 5
self.bold_font = QtGui.QApplication.font()
self.bold_font.setBold(True)
self._admin = admin
self._widget_mapper = widget_mapper
self._field_attributes = dict()
self._index = dict()
for i, (field_name, field_attributes ) in enumerate( columns):
self._field_attributes[field_name] = field_attributes
self._index[field_name] = i
def create_editor( self, field_name, parent ):
"""
        :return: a :class:`QtGui.QWidget` or None if field_name is unknown
"""
index = self._index[field_name]
model = self._widget_mapper.model()
delegate = self._widget_mapper.itemDelegate()
model_index = model.index( self._widget_mapper.currentIndex(), index )
widget_editor = delegate.createEditor(
parent,
self.option,
model_index
)
widget_editor.setObjectName('%s_editor'%field_name)
delegate.setEditorData( widget_editor, model_index )
self._widget_mapper.addMapping( widget_editor, index )
return widget_editor
def create_label( self, field_name, editor, parent ):
from camelot.view.controls.field_label import FieldLabel
from camelot.view.controls.editors.wideeditor import WideEditor
field_attributes = self._field_attributes[field_name]
hide_title = field_attributes.get( 'hide_title', False )
widget_label = None
if not hide_title:
widget_label = FieldLabel(
field_name,
field_attributes['name'],
field_attributes,
self._admin
)
widget_label.setObjectName('%s_label'%field_name)
if not isinstance(editor, WideEditor):
widget_label.setAlignment(Qt.AlignVCenter | Qt.AlignRight)
# required fields font is bold
nullable = field_attributes.get( 'nullable', True )
if not nullable:
widget_label.setFont( self.bold_font )
return widget_label
class FormWidget(QtGui.QWidget):
"""A form widget comes inside a form view"""
changed_signal = QtCore.pyqtSignal( int )
def __init__(self, parent, admin):
QtGui.QWidget.__init__(self, parent)
self._admin = admin
widget_mapper = QtGui.QDataWidgetMapper(self)
widget_mapper.setObjectName('widget_mapper')
if self._admin.get_save_mode()=='on_leave':
widget_mapper.setSubmitPolicy(QtGui.QDataWidgetMapper.ManualSubmit)
widget_layout = QtGui.QHBoxLayout()
widget_layout.setSpacing(0)
widget_layout.setContentsMargins(0, 0, 0, 0)
self._index = 0
self._model = None
self._form = None
self._columns = None
self._delegate = None
self.setLayout(widget_layout)
def get_model(self):
return self._model
def set_model(self, model):
self._model = model
self._model.dataChanged.connect( self._data_changed )
self._model.layoutChanged.connect( self._layout_changed )
self._model.item_delegate_changed_signal.connect( self._item_delegate_changed )
self._model.setObjectName( 'model' )
widget_mapper = self.findChild(QtGui.QDataWidgetMapper, 'widget_mapper' )
if widget_mapper:
widget_mapper.setModel( model )
register.register( model, widget_mapper )
def get_columns_and_form():
return (self._model.getColumns(), self._admin.get_form_display())
post(get_columns_and_form, self._set_columns_and_form)
def clear_mapping(self):
widget_mapper = self.findChild(QtGui.QDataWidgetMapper, 'widget_mapper' )
if widget_mapper:
widget_mapper.clearMapping()
@QtCore.pyqtSlot( QtCore.QModelIndex, QtCore.QModelIndex )
def _data_changed(self, index_from, index_to):
#@TODO: only revert if this form is in the changed range
widget_mapper = self.findChild(QtGui.QDataWidgetMapper, 'widget_mapper' )
if widget_mapper:
widget_mapper.revert()
self.changed_signal.emit( widget_mapper.currentIndex() )
@QtCore.pyqtSlot()
def _layout_changed(self):
widget_mapper = self.findChild(QtGui.QDataWidgetMapper, 'widget_mapper' )
if widget_mapper:
widget_mapper.revert()
self.changed_signal.emit( widget_mapper.currentIndex() )
@QtCore.pyqtSlot()
def _item_delegate_changed(self):
from camelot.view.controls.delegates.delegatemanager import \
DelegateManager
self._delegate = self._model.getItemDelegate()
self._delegate.setObjectName('delegate')
        assert self._delegate is not None
assert isinstance(self._delegate, DelegateManager)
self._create_widgets()
@QtCore.pyqtSlot(int)
def current_index_changed( self, index ):
self.changed_signal.emit( index )
def set_index(self, index):
self._index = index
widget_mapper = self.findChild(QtGui.QDataWidgetMapper, 'widget_mapper' )
if widget_mapper:
widget_mapper.setCurrentIndex(self._index)
def get_index(self):
widget_mapper = self.findChild(QtGui.QDataWidgetMapper, 'widget_mapper' )
if widget_mapper:
return widget_mapper.currentIndex()
def submit(self):
widget_mapper = self.findChild(QtGui.QDataWidgetMapper, 'widget_mapper' )
if widget_mapper:
widget_mapper.submit()
@QtCore.pyqtSlot(tuple)
def _set_columns_and_form(self, columns_and_form ):
self._columns, self._form = columns_and_form
self._create_widgets()
def _create_widgets(self):
"""Create value and label widgets"""
#
# Dirty trick to make form views work during unit tests, since unit
# tests have no event loop running, so the delegate will never be set,
# so we get it and are sure it will be there if we are running without
# threads
#
if not self._delegate:
self._delegate = self._model.getItemDelegate()
#
# end of dirty trick
#
# only if all information is available, we can start building the form
if not (self._form and self._columns and self._delegate):
return
widgets = {}
widget_mapper = self.findChild(QtGui.QDataWidgetMapper, 'widget_mapper' )
if not widget_mapper:
return
LOGGER.debug( 'begin creating widgets' )
widget_mapper.setItemDelegate(self._delegate)
widgets = FormEditors( self._columns, widget_mapper, self._delegate, self._admin )
widget_mapper.setCurrentIndex( self._index )
LOGGER.debug( 'put widgets on form' )
self.layout().insertWidget(0, self._form.render( widgets, self, True) )
LOGGER.debug( 'done' )
#self._widget_layout.setContentsMargins(7, 7, 7, 7)
class FormView(AbstractView):
"""A FormView is the combination of a FormWidget, possible actions and menu
items
.. form_widget: The class to be used as a the form widget inside the form
view"""
form_widget = FormWidget
def __init__(self, title, admin, model, index, parent = None):
AbstractView.__init__( self, parent )
layout = QtGui.QVBoxLayout()
layout.setSpacing( 1 )
layout.setContentsMargins( 1, 1, 1, 1 )
layout.setObjectName( 'layout' )
form_and_actions_layout = QtGui.QHBoxLayout()
form_and_actions_layout.setObjectName('form_and_actions_layout')
layout.addLayout( form_and_actions_layout )
self.model = model
self.admin = admin
self.title_prefix = title
self.refresh_action = Refresh()
form = FormWidget(self, admin)
form.setObjectName( 'form' )
form.changed_signal.connect( self.update_title )
form.set_model(model)
form.set_index(index)
form_and_actions_layout.addWidget(form)
self.gui_context = FormActionGuiContext()
self.gui_context.workspace = self
self.gui_context.admin = admin
self.gui_context.view = self
self.gui_context.widget_mapper = self.findChild( QtGui.QDataWidgetMapper,
'widget_mapper' )
self.setLayout( layout )
self.change_title(title)
if hasattr(admin, 'form_size') and admin.form_size:
self.setMinimumSize(admin.form_size[0], admin.form_size[1])
self.accept_close_event = False
get_actions = admin.get_form_actions
post( functools.update_wrapper( functools.partial( get_actions,
None ),
get_actions ),
self.set_actions )
get_toolbar_actions = admin.get_form_toolbar_actions
post( functools.update_wrapper( functools.partial( get_toolbar_actions,
Qt.TopToolBarArea ),
get_toolbar_actions ),
self.set_toolbar_actions )
@QtCore.pyqtSlot()
def refresh(self):
"""Refresh the data in the current view"""
self.model.refresh()
def _get_title( self, index ):
obj = self.model._get_object( index )
return u'%s %s' % (
self.title_prefix,
self.admin.get_verbose_identifier(obj)
)
@QtCore.pyqtSlot( int )
def update_title(self, current_index ):
post( self._get_title, self.change_title, args=(current_index,) )
@QtCore.pyqtSlot(list)
def set_actions(self, actions):
form = self.findChild(QtGui.QWidget, 'form' )
layout = self.findChild(QtGui.QLayout, 'form_and_actions_layout' )
if actions and form and layout:
side_panel_layout = QtGui.QVBoxLayout()
from camelot.view.controls.actionsbox import ActionsBox
LOGGER.debug('setting Actions for formview')
actions_widget = ActionsBox( parent = self,
gui_context = self.gui_context )
actions_widget.setObjectName('actions')
actions_widget.set_actions( actions )
side_panel_layout.addWidget( actions_widget )
side_panel_layout.addStretch()
layout.addLayout( side_panel_layout )
@QtCore.pyqtSlot(list)
def set_toolbar_actions(self, actions):
layout = self.findChild( QtGui.QLayout, 'layout' )
if layout and actions:
toolbar = QtGui.QToolBar()
for action in actions:
qaction = action.render( self.gui_context, toolbar )
qaction.triggered.connect( self.action_triggered )
toolbar.addAction( qaction )
toolbar.addWidget( BusyWidget() )
layout.insertWidget( 0, toolbar, 0, Qt.AlignTop )
# @todo : this show is needed on OSX or the form window
# is hidden after the toolbar is added, maybe this can
# be solved using windowflags, since this causes some
# flicker
self.show()
@QtCore.pyqtSlot( bool )
def action_triggered( self, _checked = False ):
action_action = self.sender()
action_action.action.gui_run( self.gui_context )
@QtCore.pyqtSlot()
def validate_close( self ):
self.admin.form_close_action.gui_run( self.gui_context )
def close_view( self, accept ):
self.accept_close_event = accept
        if accept:
# clear mapping to prevent data being written again to the model,
# when the underlying object would be reverted
form = self.findChild( QtGui.QWidget, 'form' )
            if form is not None:
form.clear_mapping()
self.close()
def closeEvent(self, event):
        if self.accept_close_event:
event.accept()
else:
# make sure the next closeEvent is sent after this one
# is processed
QtCore.QTimer.singleShot( 0, self.validate_close )
event.ignore()
|
From one of the world’s leading data scientists, a landmark tour of the new science of idea flow, offering revolutionary insights into the mysteries of collective intelligence and social influence.

If the Big Data revolution has a presiding genius, it is MIT’s Alex “Sandy” Pentland. Over years of groundbreaking experiments, he has distilled remarkable discoveries significant enough to become the bedrock of a whole new scientific field: social physics. Humans have more in common with bees than we like to admit: we’re social creatures first and foremost. Our most important habits of action - and most basic notions of common sense - are wired into us through our coordination in social groups. Social physics is about idea flow, the way human social networks spread ideas and transform those ideas into behaviors.

Thanks to the millions of digital bread crumbs people leave behind via smartphones, GPS devices, and the Internet, the amount of new information we have about human activity is truly profound. Until now, sociologists have depended on limited data sets and surveys that tell us how people say they think and behave, rather than what they actually do. As a result, we’ve been stuck with the same stale social structures - classes, markets - and a focus on individual actors, data snapshots, and steady states.

Pentland shows that, in fact, humans respond much more powerfully to social incentives that involve rewarding others and strengthening the ties that bind than to incentives that involve only their own economic self-interest. Pentland and his teams have found that they can study patterns of information exchange in a social network without any knowledge of the actual content of the information and predict with stunning accuracy how productive and effective that network is, whether it’s a business or an entire city. We can maximize a group’s collective intelligence to improve performance and use social incentives to crea

Language: English. Narrator: Robert Petkoff.
Audio sample: http://samples.audible.de/bk/peng/002286/bk_peng_002286_sample.mp3. Digital audiobook in aax.
|
import os

from pact_test.config.config_builder import Config


def test_default_consumer_tests_path():
    config = Config()
    assert config.consumer_tests_path == 'tests/service_consumers'


def test_default_provider_tests_path():
    config = Config()
    assert config.provider_tests_path == 'tests/service_providers'


def test_default_pact_broker_uri():
    config = Config()
    assert config.pact_broker_uri == 'http://localhost:9292/'


def test_custom_consumer_tests_path():
    class TestConfig(Config):
        def path_to_user_config_file(self):
            return os.path.join(os.getcwd(), 'tests',
                                'resources', 'config',
                                'consumer_only.json')

    config = TestConfig()
    assert config.pact_broker_uri == 'http://localhost:9292/'
    assert config.consumer_tests_path == 'mypath/mytests'
    assert config.provider_tests_path == 'tests/service_providers'


def test_custom_provider_tests_path():
    class TestConfig(Config):
        def path_to_user_config_file(self):
            return os.path.join(os.getcwd(), 'tests',
                                'resources', 'config',
                                'provider_only.json')

    config = TestConfig()
    assert config.pact_broker_uri == 'http://localhost:9292/'
    assert config.provider_tests_path == 'mypath/mytests'
    assert config.consumer_tests_path == 'tests/service_consumers'


def test_custom_pact_broker_uri():
    class TestConfig(Config):
        def path_to_user_config_file(self):
            return os.path.join(os.getcwd(), 'tests',
                                'resources', 'config',
                                'pact_broker_only.json')

    config = TestConfig()
    assert config.pact_broker_uri == 'mypath/mytests'
    assert config.provider_tests_path == 'tests/service_providers'
    assert config.consumer_tests_path == 'tests/service_consumers'


def test_user_settings():
    class TestConfig(Config):
        def path_to_user_config_file(self):
            return os.path.join(os.getcwd(), 'tests',
                                'resources', 'config',
                                '.pact.json')

    config = TestConfig()
    assert config.pact_broker_uri == 'mypath/mybroker'
    assert config.provider_tests_path == 'mypath/myprovider'
    assert config.consumer_tests_path == 'mypath/myconsumer'
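The tests above exercise a `Config` class that supplies defaults and merges overrides from a user JSON file whose location is returned by `path_to_user_config_file`. The actual `pact_test` implementation may differ; this is a minimal sketch inferred from the tests (the default values and the hook method name are taken directly from them):

```python
import json
import os


class Config(object):
    """Pact-test settings: defaults, overridable via a user JSON file."""

    DEFAULTS = {
        'pact_broker_uri': 'http://localhost:9292/',
        'consumer_tests_path': 'tests/service_consumers',
        'provider_tests_path': 'tests/service_providers',
    }

    def __init__(self):
        settings = dict(self.DEFAULTS)
        path = self.path_to_user_config_file()
        # Only keys present in the user file override the defaults.
        if path and os.path.isfile(path):
            with open(path) as fh:
                settings.update(json.load(fh))
        for key, value in settings.items():
            setattr(self, key, value)

    def path_to_user_config_file(self):
        # Default location; the tests above override this hook in subclasses.
        return os.path.join(os.getcwd(), '.pact.json')
```

With no `.pact.json` present, `Config()` simply exposes the three defaults, which is exactly what the first three tests assert.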
|
This is one of the most remarkable cases that I have ever read. It is important because it greatly increases the range of creations in which copyright can subsist. I first thought it was a bit of a joke because that is how it was presented in the press (see Food taste 'not protected by copyright' rules EU court 13 Nov 2018 BBC website). Now I see its potential for all the creative industries.
The owner of the intellectual property rights in a spreadable dip called Heksenkaas sued Smilde Foods for infringing its copyright in the taste of its dip - yep, that's right, taste - by making and selling a product called Witte Wievenkaas in the Gelderland District Court (Rechtbank Gelderland). That court threw out the claim on the ground that "it was not necessary to rule on whether the taste of Heksenkaas was protectable under copyright law, given that Levola’s claims had, in any event, to be rejected since it had not indicated which elements, or combination of elements, of the taste of Heksenkaas gave it its unique, original character and personal stamp."
The claimant appealed to the Court of Appeal for Arnhem and Leeuwarden (Gerechtshof Arnhem-Leeuwarden), which considered that the key issue in this case was whether the taste of a food product was eligible for copyright protection. The reason that the appeal court thought that it might is that the Dutch Supreme Court had previously stated that copyright might subsist in the scent of a perfume in Kecofa BV v Lancôme Parfums et Beauté & Cie SNC NL:HR:2006:AU8940. That was contrary to the decision of the French Cour de Cassation (see Legal News Fragrance d'un parfum: contrefaçon et concurrence déloyale 3 Jan 2014).
The defendant challenged the admissibility of the proceedings on the ground that the action should be dismissed because the claimant had not identified the elements of Heksenkaas that were alleged to be the author's intellectual creation. The Court of Justice refused to entertain that point as it was exclusively a matter for the referring court. It observed that it was entitled to refuse to consider references where the point of law bore no relationship to the facts or where the issue was entirely hypothetical, but neither was the case here. There was no reason why it should not consider the questions that had been referred to it and it proceeded to do so.
The Court reformulated the first question as to "whether Directive 2001/29 must be interpreted as precluding (i) the taste of a food product from being protected by copyright under that directive and (ii) national legislation from being interpreted in such a way that it grants copyright protection to such a taste."
It held that the taste of such a product could be protected by copyright under that directive only if that taste can be classified as a ‘work’ within the meaning of the directive and in that regard, two cumulative conditions had to be satisfied in order for the subject matter to be classified as a ‘work’. First, the subject matter concerned must be original in the sense that it is the author’s own intellectual creation (Cases C-403/08 and C-429/08, Football Association Premier League and Others v QC Leisure and Others ECLI:EU:C:2011:631, FSR 1, 1 CMLR 29, All ER (EC) 629, EU:C:2011:631, EUECJ C-403/08, ECR I-9083, ECDR 8, Bus LR 1321, CEC 242). Secondly, only something which is the expression of the author’s own intellectual creation may be classified as a ‘work’ within the meaning of Directive 2001/29 (Case C-5/08 Infopaq International A/S v Danske Dagblades Forening and Football Association Premier League and Others).
Although the European Union is not a party to the Berne Convention, it is nevertheless obliged (under art 1 (4) of the WIPO Copyright Treaty, to which it is a party and which Directive 2001/29 is intended to implement) to comply with arts 1 to 21 of the Berne Convention. Under art 2 (1) of that convention, "literary and artistic works" include every production in the literary, scientific and artistic domain, whatever the mode or form of its expression may be. Moreover, in accordance with art 2 of the WIPO Copyright Treaty and art 9 (2) of TRIPS, copyright protection may be granted to expressions, but not to ideas, procedures, methods of operation or mathematical concepts as such. Accordingly, for there to be a ‘work’ as referred to in Directive 2001/29, the subject matter protected by copyright must be expressed in a manner which makes it identifiable with sufficient precision and objectivity, even though that expression is not necessarily in permanent form.
That is because the authorities responsible for ensuring that the exclusive rights inherent in copyright are protected must be able to identify, clearly and precisely, the subject matter so protected. The same is true for individuals, in particular, economic operators, who must be able to identify, clearly and precisely, what is the subject matter of protection which third parties, especially competitors, enjoy. Secondly, the need to ensure that there is no element of subjectivity -- given that it is detrimental to legal certainty -- in the process of identifying the protected subject matter means that the latter must be capable of being expressed in a precise and objective manner.
The taste of a food product cannot, however, be pinned down with precision and objectivity. Unlike, for example, a literary, pictorial, cinematographic or musical work, which is a precise and objective form of expression, the taste of a food product will be identified essentially on the basis of taste sensations and experiences, which are subjective and variable since they depend, inter alia, on factors particular to the person tasting the product concerned, such as age, food preferences and consumption habits, as well as on the environment or context in which the product is consumed. Moreover, it is not yet possible in the current state of scientific development to achieve by technical means a precise and objective identification of the taste of a food product which enables it to be distinguished from the taste of other products of the same kind.
It must, therefore, be concluded, on the basis of all of the foregoing considerations, that the taste of a food product cannot be classified as a ‘work’ within the meaning of Directive 2001/29. In view of the need for a uniform interpretation of the concept of a "work" throughout the EU, it must follow that Directive 2001/29 prevents national legislation from being interpreted in such a way that it grants copyright protection to the taste of a food product.
The answer to the first question is that Directive 2001/29 must be interpreted as precluding (i) the taste of a food product from being protected by copyright under that directive and (ii) national legislation from being interpreted in such a way that it grants copyright protection to such a taste.
In view of its decision on the first question, the Court did not need to consider the second.
The answer to the national court's question was: "Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society must be interpreted as precluding (i) the taste of a food product from being protected by copyright under that directive and (ii) national legislation from being interpreted in such a way that it grants copyright protection to such a taste."
Contrast the position in the United Kingdom. S.1 (1) of the Copyright, Designs and Patents Act 1988 provides that copyright subsists in "(a) original literary, dramatic, musical or artistic works, (b) sound recordings, films or broadcasts, and (c) the typographical arrangement of published editions."
S.1 (2) adds for good measure that "copyright work" means a work of any of those descriptions in which copyright subsists.
What is remarkable about this judgment is that it greatly expands the creations in which copyright can subsist. Copyright can subsist in any production in the literary, scientific and artistic domain, whatever the mode or form of its expression may be, so long as it is more than an idea, procedure, method of operation or mathematical concept as such and it is an expression of its author's intellectual creation. The claimant only failed because taste is subjective. Should there ever be an objective means of identifying taste, there would be no reason why flavour and fragrance could not be protected by copyright.
Anyone wishing to discuss this case or copyright in general may call me on +44 (0)20 7404 5252 during office hours or send me a message through my contact page.
|
# -*- coding: utf-8 -*-
from cms.exceptions import DuplicatePlaceholderWarning
from cms.models import Page
from cms.templatetags.cms_tags import Placeholder
from cms.utils.placeholder import validate_placeholder_name
from django.contrib.sites.models import Site, SITE_CACHE
from django.shortcuts import get_object_or_404
from django.template import NodeList, VariableNode, TemplateSyntaxError
from django.template.loader import get_template
from django.template.loader_tags import ConstantIncludeNode, ExtendsNode, BlockNode
import warnings
from sekizai.helpers import is_variable_extend_node
def get_page_from_plugin_or_404(cms_plugin):
return get_object_or_404(Page, placeholders=cms_plugin.placeholder)
def _extend_blocks(extend_node, blocks):
"""
Extends the dictionary `blocks` with *new* blocks in the parent node (recursive)
"""
# we don't support variable extensions
if is_variable_extend_node(extend_node):
return
parent = extend_node.get_parent(None)
# Search for new blocks
for node in parent.nodelist.get_nodes_by_type(BlockNode):
        if node.name not in blocks:
blocks[node.name] = node
else:
# set this node as the super node (for {{ block.super }})
block = blocks[node.name]
seen_supers = []
while hasattr(block.super, 'nodelist') and block.super not in seen_supers:
seen_supers.append(block.super)
block = block.super
block.super = node
# search for further ExtendsNodes
for node in parent.nodelist.get_nodes_by_type(ExtendsNode):
_extend_blocks(node, blocks)
break
def _find_topmost_template(extend_node):
parent_template = extend_node.get_parent({})
for node in parent_template.nodelist.get_nodes_by_type(ExtendsNode):
        # There can only be one {% extends %} tag in a template, otherwise Django raises an exception
return _find_topmost_template(node)
# No ExtendsNode
return extend_node.get_parent({})
def _extend_nodelist(extend_node):
"""
Returns a list of placeholders found in the parent template(s) of this
ExtendsNode
"""
# we don't support variable extensions
if is_variable_extend_node(extend_node):
return []
# This is a dictionary mapping all BlockNode instances found in the template that contains extend_node
blocks = extend_node.blocks
_extend_blocks(extend_node, blocks)
placeholders = []
for block in blocks.values():
placeholders += _scan_placeholders(block.nodelist, block, blocks.keys())
# Scan topmost template for placeholder outside of blocks
parent_template = _find_topmost_template(extend_node)
placeholders += _scan_placeholders(parent_template.nodelist, None, blocks.keys())
return placeholders
def _scan_placeholders(nodelist, current_block=None, ignore_blocks=None):
placeholders = []
if ignore_blocks is None:
# List of BlockNode instances to ignore.
        # This is important to avoid processing overridden block nodes.
ignore_blocks = []
for node in nodelist:
# check if this is a placeholder first
if isinstance(node, Placeholder):
placeholders.append(node.get_name())
# if it's a Constant Include Node ({% include "template_name.html" %})
# scan the child template
elif isinstance(node, ConstantIncludeNode):
# if there's an error in the to-be-included template, node.template becomes None
if node.template:
placeholders += _scan_placeholders(node.template.nodelist, current_block)
# handle {% extends ... %} tags
elif isinstance(node, ExtendsNode):
placeholders += _extend_nodelist(node)
# in block nodes we have to scan for super blocks
elif isinstance(node, VariableNode) and current_block:
if node.filter_expression.token == 'block.super':
if not hasattr(current_block.super, 'nodelist'):
raise TemplateSyntaxError("Cannot render block.super for blocks without a parent.")
placeholders += _scan_placeholders(current_block.super.nodelist, current_block.super)
# ignore nested blocks which are already handled
elif isinstance(node, BlockNode) and node.name in ignore_blocks:
continue
# if the node has the newly introduced 'child_nodelists' attribute, scan
# those attributes for nodelists and recurse them
elif hasattr(node, 'child_nodelists'):
for nodelist_name in node.child_nodelists:
if hasattr(node, nodelist_name):
subnodelist = getattr(node, nodelist_name)
if isinstance(subnodelist, NodeList):
if isinstance(node, BlockNode):
current_block = node
placeholders += _scan_placeholders(subnodelist, current_block, ignore_blocks)
# else just scan the node for nodelist instance attributes
else:
for attr in dir(node):
obj = getattr(node, attr)
if isinstance(obj, NodeList):
if isinstance(node, BlockNode):
current_block = node
placeholders += _scan_placeholders(obj, current_block, ignore_blocks)
return placeholders
def get_placeholders(template):
compiled_template = get_template(template)
placeholders = _scan_placeholders(compiled_template.nodelist)
clean_placeholders = []
for placeholder in placeholders:
if placeholder in clean_placeholders:
            warnings.warn("Duplicate {{% placeholder \"{0}\" %}} "
                          "in template {1}."
                          .format(placeholder, template),
                          DuplicatePlaceholderWarning)
else:
validate_placeholder_name(placeholder)
clean_placeholders.append(placeholder)
return clean_placeholders
SITE_VAR = "site__exact"
def current_site(request):
if SITE_VAR in request.REQUEST:
site_pk = request.REQUEST[SITE_VAR]
    else:
        site_pk = request.session.get('cms_admin_site', None)
if site_pk:
try:
site = SITE_CACHE.get(site_pk) or Site.objects.get(pk=site_pk)
SITE_CACHE[site_pk] = site
return site
except Site.DoesNotExist:
return None
else:
return Site.objects.get_current()
|
Forecasting the effects of climate change on nitrogen (N) cycling in pastures requires an understanding of changes in tissue N. A new study published in AoB PLANTS by Volder et al. shows that elevated CO2, climate warming, and management impact shoot and root nitrogen concentrations in different ways in managed pastures. Management (clipping frequency) had the strongest impact on aboveground tissue N concentrations, while the impact of the climate change drivers on shoot N concentration was interactive and varied seasonally. Green leaf N concentrations were decreased by elevated CO2 and increased by more frequent clipping. Both warming treatments increased leaf N concentrations under ambient CO2 concentrations, but did not significantly alter leaf N concentrations under elevated CO2 concentrations. Fine root N concentrations were mostly unaffected by the treatments, although elevated CO2 decreased root N concentration in deeper soil layers. The interactive nature of the climate change drivers through time, as well as the fact that root N concentration response to the treatments was entirely different from aboveground responses, highlights the complexity in predicting plant N nutrition responses to projected climate change.
|
"""Leetcode 389. Find the Difference
Easy
URL: https://leetcode.com/problems/find-the-difference/
Given two strings s and t consisting of only lowercase letters,
string t is generated by randomly shuffling string s and then
adding one more letter at a random position.
Find the letter that was added in t.
Example:
Input:
s = "abcd"
t = "abcde"
Output:
e
Explanation:
'e' is the letter that was added.
"""
class SolutionSortIter(object):
def findTheDifference(self, s, t):
"""
:type s: str
:type t: str
:rtype: str
Time complexity: O(n*logn).
Space complexity: O(n).
"""
# Sort s & t.
s_ls = list(s)
t_ls = list(t)
s_ls.sort()
t_ls.sort()
# Iterate through s's chars to check mismatch.
for i, c in enumerate(s_ls):
if c != t_ls[i]:
return t_ls[i]
# If no mismatch, then the t's last char is the diff one.
return t_ls[-1]
class SolutionCharCountDict(object):
def findTheDifference(self, s, t):
"""
:type s: str
:type t: str
:rtype: str
Time complexity: O(n).
Space complexity: O(n).
"""
from collections import defaultdict
char_count_d = defaultdict(int)
# Iterate through s's chars and increment counter.
for c in s:
char_count_d[c] += 1
# Iterate through t's chars.
for c in t:
if not char_count_d[c]:
# If c is not in s, c is additional char.
return c
else:
# If c is in s, decrement its counter.
char_count_d[c] -= 1
class SolutionOrdSumDiff(object):
def findTheDifference(self, s, t):
"""
:type s: str
:type t: str
:rtype: str
Time complexity: O(n).
Space complexity: O(1).
"""
ord_sum_diff = 0
# Decrement ord_sum_diff by s's char ordinal.
for c in s:
ord_sum_diff -= ord(c)
# Increment by t's char ordinal.
for c in t:
ord_sum_diff += ord(c)
return chr(ord_sum_diff)
class SolutionXOR(object):
def findTheDifference(self, s, t):
"""
:type s: str
:type t: str
:rtype: str
Time complexity: O(n).
Space complexity: O(1).
"""
xor = 0
# XOR by s's char ord.
for c in s:
xor ^= ord(c)
# XOR by t's char ord.
for c in t:
xor ^= ord(c)
return chr(xor)
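As a side note (this class is not part of the original file), the character-counting approach can be condensed with `collections.Counter`: subtracting one counter from another keeps only positive counts, so exactly the added letter survives.

```python
from collections import Counter


class SolutionCounter(object):
    def findTheDifference(self, s, t):
        """
        :type s: str
        :type t: str
        :rtype: str
        Time complexity: O(n).
        Space complexity: O(1) (at most 26 lowercase keys).
        """
        # Counter subtraction drops zero/negative counts, so the
        # single extra character of t is the only key remaining.
        diff = Counter(t) - Counter(s)
        return next(iter(diff))
```

For s = "abcd" and t = "abcde", `Counter(t) - Counter(s)` is `Counter({'e': 1})`, so the method returns `'e'`.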
def main():
    # Output: e
    s = "abcd"
    t = "abcde"
    print(SolutionSortIter().findTheDifference(s, t))
    print(SolutionCharCountDict().findTheDifference(s, t))
    print(SolutionOrdSumDiff().findTheDifference(s, t))
    print(SolutionXOR().findTheDifference(s, t))

    # Output: a
    s = ""
    t = "a"
    print(SolutionSortIter().findTheDifference(s, t))
    print(SolutionCharCountDict().findTheDifference(s, t))
    print(SolutionOrdSumDiff().findTheDifference(s, t))
    print(SolutionXOR().findTheDifference(s, t))
if __name__ == '__main__':
main()
|
For Internet speeds at affordable prices near Bountiful, Utah, the choice is CenturyLink High-Speed Internet. When you sign up for Internet service from CenturyLink you get to choose a speed that will suit your budget and your lifestyle demands. You’ll have no trouble connecting all the phones, computers, and tablets in your Bountiful home. And, on top of that, CenturyLink High-Speed Internet in Bountiful includes 11 free email accounts and anytime access to your account online so that it is easy to manage your service. You can even watch your favorite TV shows on the CenturyLink home page. Check out CenturyLink’s 3 Year Price-Lock Guarantee to get the most out of your service. Give us a call today and set up CenturyLink High-Speed Internet in your Bountiful home. Make the call today. The number is 1-801-871-3529.
Loaded with advanced calling features, home phone from CenturyLink in Bountiful will change the way you communicate. CenturyLink home phone makes it easy to keep in touch with friends and family without using up your cell phone minutes. Plus, you’ll never have to worry about dropped calls, bad reception, or missing coverage. CenturyLink offers Bountiful residents unlimited local calling—and customers can upgrade to Unlimited Nationwide calling for only a few more dollars a month. CenturyLink also offers international and multilingual calling plans so you can stay in touch no matter where your loved ones live. Sign up for a home phone from CenturyLink today.
CenturyLink offers home service bundles that combine the best home phone, TV, and Internet options at surprisingly low cost. Many of CenturyLink’s most popular bundles in Bountiful come with a 3 Year Price Lock Guarantee—so you can pay the same amount each month for the duration of your contract. And if you combine DIRECTV service with CenturyLink you’ll enjoy privileged access to DIRECTV’s popular NFL SUNDAY TICKET. When you upgrade your Bountiful home with CenturyLink bundles, you’ll have control over your options that allows you to get exactly what you want. Choose the features you care about, make sure the channels you love are included, and pick an Internet speed that will meet your specific online demands. Choose a CenturyLink bundle that combines the services you need at a price you can afford.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import os
__all__ = ['is_executable', 'find_executable_path', 'file_size']
def is_executable(path):
return os.path.isfile(path) and os.access(path, os.X_OK)
def find_executable_path(command, paths=None):
path = os.environ.get('%s_PATH' % command.upper())
if not path:
if not paths:
# Most commonly used paths to install software
paths = ['/usr/bin/%s', '/usr/local/bin/%s', '/bin/%s']
paths = [p % command for p in paths]
for p in paths:
if is_executable(p):
path = p
break
return path or command
def file_size(fp):
# File descriptor
if hasattr(fp, 'name') and os.path.exists(fp.name):
return os.path.getsize(fp.name)
# File name
    if isinstance(fp, str) and os.path.exists(fp):
return os.path.getsize(fp)
# File buffer
if hasattr(fp, 'seek') and hasattr(fp, 'tell'):
pos = fp.tell()
fp.seek(0, os.SEEK_END)
size = fp.tell()
fp.seek(pos)
return size
    # File wrapper, e.g. Django File object
if hasattr(fp, 'size'):
return fp.size
raise ValueError("Unable to determine the file's size: %s" % (fp, ))
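For illustration, here is how the buffer branch of `file_size` behaves on an in-memory stream: the current position is saved, the stream is seeked to the end to read its length, and the position is restored afterwards. The function body is repeated here (same logic as above) so the demo runs standalone.

```python
import io
import os


def file_size(fp):
    # Same logic as the module above, repeated for a self-contained demo.
    if hasattr(fp, 'name') and os.path.exists(fp.name):
        return os.path.getsize(fp.name)
    if isinstance(fp, str) and os.path.exists(fp):
        return os.path.getsize(fp)
    if hasattr(fp, 'seek') and hasattr(fp, 'tell'):
        pos = fp.tell()
        fp.seek(0, os.SEEK_END)   # jump to the end to read the length
        size = fp.tell()
        fp.seek(pos)              # restore the caller's position
        return size
    if hasattr(fp, 'size'):
        return fp.size
    raise ValueError("Unable to determine the file's size: %r" % (fp,))


buf = io.BytesIO(b"hello world")
buf.seek(3)
print(file_size(buf))  # 11
print(buf.tell())      # 3 - the read position is untouched
```

Note that the seek/tell branch deliberately leaves the stream position where it found it, so the function is safe to call in the middle of a read.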
|
Arrgghh!! Talk Like a Pirate Day may be but once a year, but ye salty dogs deserve the finest pirate clothin' for yer portraits all year round!
Sign on the dotted line to get yer free Pirate Sampler PDF!
|