from .locals import get_cid
class CidCursorWrapper(object):
"""
A cursor wrapper that attempts to add a cid comment to each query
"""
def __init__(self, cursor):
self.cursor = cursor
def __getattr__(self, attr):
if attr in self.__dict__:
return self.__dict__[attr]
else:
return getattr(self.cursor, attr)
def __iter__(self):
return iter(self.cursor)
def __enter__(self):
return self
    def __exit__(self, exc_type, exc_value, traceback):
        self.close()
def add_comment(self, sql):
cid = get_cid()
if cid:
            cid = cid.replace('/*', r'\/\*').replace('*/', r'\*\/')
return "/* cid: {} */\n{}".format(cid, sql)
return sql
# The following methods cannot be implemented in __getattr__, because the
# code must run when the method is invoked, not just when it is accessed.
def callproc(self, procname, params=None):
return self.cursor.callproc(procname, params)
def execute(self, sql, params=None):
sql = self.add_comment(sql)
return self.cursor.execute(sql, params)
def executemany(self, sql, param_list):
sql = self.add_comment(sql)
return self.cursor.executemany(sql, param_list)
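The comment-injection logic can be exercised in isolation. The sketch below assumes a fixed correlation id passed in directly, rather than the thread-local `get_cid()` lookup the wrapper uses:

```python
# Standalone sketch of the cid-comment logic (fixed cid argument is an
# assumption; CidCursorWrapper resolves it via get_cid() instead).
def add_cid_comment(sql, cid):
    if cid:
        # Escape comment delimiters so the cid cannot close the comment
        # early and smuggle extra SQL into the statement.
        cid = cid.replace('/*', r'\/\*').replace('*/', r'\*\/')
        return "/* cid: {} */\n{}".format(cid, sql)
    return sql

print(add_cid_comment("SELECT 1", "req-42"))
```

An empty or missing cid leaves the SQL untouched, so the wrapper is safe to install unconditionally.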
|
# Copyright (c) 2017 Alfredo de la fuente <alfredodelafuente@avanzosc.es>
# License AGPL-3.0 or later (http://www.gnu.org/licenses/agpl).
from odoo import api, fields, models
class StockPicking(models.Model):
_inherit = 'stock.picking'
analytic_account_id = fields.Many2one(
comodel_name='account.analytic.account', string='Analytic account',
states={'done': [('readonly', True)], 'cancel': [('readonly', True)]})
@api.onchange('analytic_account_id')
def onchange_analytic_account_id(self):
for picking in self.filtered(lambda x: x.analytic_account_id and
x.analytic_account_id.partner_id):
picking.partner_id = picking.analytic_account_id.partner_id.id
class StockMove(models.Model):
_inherit = 'stock.move'
def _action_done(self):
moves = super(StockMove, self)._action_done()
for move in moves.filtered(
lambda x: x.picking_id and
x.picking_id.analytic_account_id and
x.picking_id.picking_type_code in ('outgoing', 'incoming')):
vals = move._prepare_data_for_create_analytic_line()
self.env['account.analytic.line'].create(vals)
return moves
def _prepare_data_for_create_analytic_line(self):
self.ensure_one()
vals = {
'stock_move_id': self.id,
'account_id': self.picking_id.analytic_account_id.id,
'partner_id': self.picking_id.partner_id.id,
'product_id': self.product_id.id,
'product_uom_id': self.product_uom.id,
'unit_amount': self.product_qty,
'amount': self.product_qty * self.price_unit,
'name': u"{} {}".format(self.picking_id.name, self.name),
}
return vals
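The value computation in `_prepare_data_for_create_analytic_line` can be sketched framework-free; plain dicts stand in for the Odoo recordsets, and the field subset and names here are illustrative:

```python
def prepare_analytic_line_vals(move):
    # move is a plain dict standing in for a stock.move record
    # (a sketch of the fields the model above computes, not Odoo code).
    return {
        'stock_move_id': move['id'],
        'unit_amount': move['product_qty'],
        # amount = quantity moved times unit price, as in the model
        'amount': move['product_qty'] * move['price_unit'],
        'name': u"{} {}".format(move['picking_name'], move['name']),
    }

vals = prepare_analytic_line_vals({
    'id': 7, 'product_qty': 5.0, 'price_unit': 2.5,
    'picking_name': 'WH/OUT/00042', 'name': 'Product A',
})
```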
|
While I sometimes talk about playing against type and breaking out of your safe space, there is a lot to be said for playing in your comfort zone. Certainly, you should not dismiss a different type of role out of hand (such as a thinker when you usually play a brute). Allow yourself the opportunity to break in the new role, just like you might a new pair of shoes. Like shoes, however, you can sometimes tell the moment you slip one on that it's way too big or way too tight. Other times, a pair may feel alright until you start walking or running in them, at which point they may blister you up or fly off into the fields. You never know.
There is a lot to be said, after all, for buying a pair of shoes like the ones you already owned. There are ways to try out a new role which you can discuss with your GM. I’ll get to those after I briefly mention how good modern gamers have things.
Back in the olden days, once you had a character, you were stuck with him. FOREVER. Even if he was bad. Escapism was sacrificed on the altar of realism (though it gets sticky to talk about said altar) just enough that you could end up with a truly lame character, or one lame decision could leave your character forever stuck in the wrong role. Like if you were a complete noob, no one helped you, and you made a wizard even though your Int was 9 (or something like that).
Nowadays, there are kinder and gentler players and GMs and you can often take a character out for a test spin before settling into the role. The best thing to do is talk it over with everyone to make sure everyone’s on the same page. It’s not about min-maxing either. As a matter of fact, it should be far from it, in my estimation. There are some characters which don’t fit into a group or the direction the GM may want to take the game, but that’s really a discussion for another day. Remember: don’t suffer. Make sure your shoes fit.
I’ve often been at odds with some players because they want to own a closet of shoes and not stick to one pair. What do you say to players who look for that “perfect” character? I have one who is an awesome player, but feels unhappy with a strong concept character if said character isn’t useful in all situations.
We have a standing rule in our home campaign games: you get three sessions to decide whether you like a character, and you may make changes to the character at any time until those three sessions are complete. After that, you are stuck.
Certainly, it can be abused. I know folks who want mechanical utility in all situations, but that’s unreasonable (especially starting out). If your group is sizable enough, you should explain the importance of spotlight moments for all characters.
You can also explain that mechanical utility should not inhibit role playing. Just as in real life, there are those who can do things and others who direct or come up with ideas and concepts. Utility is a matter of perspective.
Typically, I think characters should be settled by the end of the second session, or at least prior to the third; beyond that it feels a bit like cheating.
I should note, we generally allow changes between sessions or with an open dialogue between player and GM.
|
import uuid
from pdflib import Document
from followthemoney import model
from normality import collapse_spaces # noqa
from ingestors.support.ocr import OCRSupport
from ingestors.support.convert import DocumentConvertSupport
class PDFSupport(DocumentConvertSupport, OCRSupport):
"""Provides helpers for PDF file context extraction."""
def pdf_extract(self, entity, pdf):
"""Extract pages and page text from a PDF file."""
entity.schema = model.get('Pages')
temp_dir = self.make_empty_directory()
for page in pdf:
self.pdf_extract_page(entity, temp_dir, page)
def pdf_alternative_extract(self, entity, pdf_path):
checksum = self.manager.store(pdf_path)
entity.set('pdfHash', checksum)
pdf = Document(bytes(pdf_path))
self.pdf_extract(entity, pdf)
def pdf_extract_page(self, document, temp_dir, page):
"""Extract the contents of a single PDF page, using OCR if need be."""
texts = page.lines
image_path = temp_dir.joinpath(str(uuid.uuid4()))
page.extract_images(path=bytes(image_path), prefix=b'img')
languages = self.manager.context.get('languages')
for image_file in image_path.glob("*.png"):
with open(image_file, 'rb') as fh:
data = fh.read()
text = self.extract_ocr_text(data, languages=languages)
if text is not None:
texts.append(text)
text = ' \n'.join(texts).strip()
entity = self.manager.make_entity('Page')
entity.make_id(document.id, page.page_no)
entity.set('document', document)
entity.set('index', page.page_no)
entity.add('bodyText', text)
self.manager.emit_entity(entity)
self.manager.emit_text_fragment(document, text, entity.id)
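The per-page text assembly in `pdf_extract_page` — native text lines first, then OCR output for any embedded images, joined and stripped — can be sketched without pdflib or an OCR backend (function and parameter names here are illustrative):

```python
def assemble_page_text(lines, ocr_texts):
    # Mirrors pdf_extract_page: start from the page's native text lines,
    # append each non-None OCR result, then join with ' \n' and strip.
    texts = list(lines)
    for text in ocr_texts:
        if text is not None:
            texts.append(text)
    return ' \n'.join(texts).strip()

page_text = assemble_page_text(["Hello", "world"], [None, "scanned caption"])
```

A `None` OCR result (image with no recognizable text) is simply skipped, so image-free and scanned pages go through the same path.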
|
Dundee United have signed winger Tope Obadeyi on a one-year contract following his departure from Kilmarnock.
The 26-year-old, whose former clubs include Bolton, Rio Ave, Plymouth and Bury, joins Stewart Murdoch (Ross County), Cammy Bell (Rangers) and Lewis Toshney (Raith Rovers) at Tannadice.
United boss Ray McKinnon told the club website: "I am delighted to get Tope. He’s a strong, powerful and quick player. He scored 10 goals in the Premiership last year and he will bring lots to our team. We will need big physical players and he fits the bill."
|
#!/usr/bin/env python2.6
# Copyright (c) 2010 Joshua Harlan Lifton.
# See LICENSE.txt for details.
from distutils.core import setup
from plover import __version__
from plover import __description__
from plover import __long_description__
from plover import __url__
from plover import __download_url__
from plover import __license__
setup(name='plover',
version=__version__,
description=__description__,
long_description=__long_description__,
url=__url__,
download_url=__download_url__,
license=__license__,
author='Joshua Harlan Lifton',
author_email='joshua.harlan.lifton@gmail.com',
maintainer='Joshua Harlan Lifton',
maintainer_email='joshua.harlan.lifton@gmail.com',
package_dir={'plover':'plover'},
packages=['plover', 'plover.machine', 'plover.gui', 'plover.oslayer',
'plover.dictionary'],
package_data={'plover' : ['assets/*']},
data_files=[('/usr/share/applications', ['application/Plover.desktop']),
('/usr/share/pixmaps', ['plover/assets/plover_on.png']),],
scripts=['application/plover'],
requires=['serial', 'Xlib', 'wx', 'appdirs', 'wxversion'],
platforms=['GNU/Linux'],
classifiers=['Programming Language :: Python',
'License :: OSI Approved :: GNU General Public License (GPL)',
'Development Status :: 4 - Beta',
'Environment :: X11 Applications',
'Intended Audience :: End Users/Desktop',
'Natural Language :: English',
'Operating System :: POSIX :: Linux',
'Topic :: Adaptive Technologies',
'Topic :: Desktop Environment',]
)
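The dunder metadata imported at the top of this script would live in `plover/__init__.py`, keeping values like the version string defined in exactly one place. A minimal sketch of that pattern (the values here are placeholders, not Plover's actual metadata):

```python
# Hypothetical plover/__init__.py exposing package metadata for setup.py
# (values are illustrative placeholders).
__version__ = '0.0.0'
__description__ = 'Open source stenotype engine'
__license__ = 'GNU General Public License (GPL)'

# setup.py imports these once and passes them straight to setup(), so a
# release only needs to bump the version string in one file.
metadata = {'version': __version__, 'license': __license__}
```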
|
A personal hotel service is an absolute priority at Jericho Hotel. Our entire team will cater to your every need during your stay. You can count on our expertise and helpful service from the moment you make your hotel reservation. We will give you a warm welcome when you arrive at our hotel. We will inform you of our wide range of hotel facilities as you settle into our warm, homely atmosphere. In short, our impeccable hotel service will always be accompanied by a bright smile and will make your stay an unforgettable experience.
Get a comfortable hotel room and experience pure bliss. To make you feel extra welcome, some fine guest supplies will be waiting for you in your hotel room, such as scented soap, shower gel and shampoo.
We offer free parking to all our hotel guests. When you arrive, we will immediately check availability in our hotel car park.
Our meeting rooms are fully equipped to make your meetings as comfortable as possible. For extra comfort, you will have plenty of natural light and a peaceful view of our pool and garden.
Jericho Hotel has 24 beautiful rooms on different floors, all non-smoking. If you would like to book a room in our Jericho hotel, we will be happy to reserve it for you.
Enjoy a delicious full breakfast during your stay at Jericho Hotel. From the moment you sit down at your table in our pleasant breakfast room, our full breakfast range is there for you to enjoy.
If you are planning a midweek break, a weekend break or a business trip, all you need to do is find the perfect hotel. Jericho Hotel will offer all the comfort you need and very friendly service in an authentic, stylish setting. You will like its privileged location close to the dam.
Our friendly and professional team is ready to serve you with a smile.
Check in: 14:00 - 22:00 Daily.
|
##
## Copyright(C) 2011-2012 The Board of Trustees of the University of Illinois.
## All rights reserved.
##
## Developed by: Roger D. Serwy
## University of Illinois
##
## Permission is hereby granted, free of charge, to any person obtaining
## a copy of this software and associated documentation files (the
## "Software"), to deal with the Software without restriction, including
## without limitation the rights to use, copy, modify, merge, publish,
## distribute, sublicense, and/or sell copies of the Software, and to
## permit persons to whom the Software is furnished to do so, subject to
## the following conditions:
##
## + Redistributions of source code must retain the above copyright
## notice, this list of conditions and the following disclaimers.
## + Redistributions in binary form must reproduce the above copyright
## notice, this list of conditions and the following disclaimers in the
## documentation and/or other materials provided with the distribution.
## + Neither the names of Roger D. Serwy, the University of Illinois, nor
## the names of its contributors may be used to endorse or promote
## products derived from this Software without specific prior written
## permission.
##
## THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
## OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
## MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
## IN NO EVENT SHALL THE CONTRIBUTORS OR COPYRIGHT HOLDERS BE LIABLE FOR
## ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF
## CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH
## THE SOFTWARE OR THE USE OR OTHER DEALINGS WITH THE SOFTWARE.
##
import sys
if sys.version < '3':
from StringIO import StringIO
from Tkinter import *
import tkFileDialog
import tkMessageBox
else:
from io import StringIO
from tkinter import *
import tkinter.filedialog as tkFileDialog
import tkinter.messagebox as tkMessageBox
import imp
try:
import importlib
HAS_IMPORTLIB = True
except ImportError:
HAS_IMPORTLIB = False
from idlelib.configHandler import idleConf, IdleConfParser
import os
def make_config_parser(cfg):
    """ Stuff a configuration string into a fake file and return an IDLE config parser """
fp = StringIO()
fp.write(cfg)
fp.write('\n')
fp.seek(0)
# parse the configuration from the fake file
confparse = IdleConfParser('')
try:
confparse.readfp(fp)
except BaseException as e:
print('\n Configuration Parse Error', e)
return None
return confparse
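`make_config_parser` can be mimicked with the standard library's `configparser`; this is a sketch of the same string-as-fake-file idea, with `IdleConfParser` (which layers IDLE-specific helpers on top) replaced by a plain `ConfigParser`:

```python
import io
import configparser

def parse_config_string(cfg):
    # Same idea as make_config_parser: wrap the string in an in-memory
    # file object and hand it to the parser; return None on parse errors.
    fp = io.StringIO(cfg + '\n')
    parser = configparser.ConfigParser()
    try:
        parser.read_file(fp)
    except configparser.Error:
        return None
    return parser

cp = parse_config_string("[MyExtension]\nenable = True\n")
```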
class ExtensionManager(object):
""" Manages extensions for IdleX
"""
def __init__(self, path):
head,tail = os.path.split(path)
self.extension_dir = head
self.IDLEX_EXTENSIONS = self.get_idlex_extensions(head)
IDLE_EXTENSIONS = [] # A list of default extensions in IDLE - those that come with the standard distribution
for i in idleConf.defaultCfg['extensions'].sections():
if i.endswith('_cfgBindings') or i.endswith('_bindings'):
continue
IDLE_EXTENSIONS.append(i)
self.IDLE_EXTENSIONS = IDLE_EXTENSIONS
def get_idlex_extensions(self, directory):
""" Get a list of user extensions from 'directory' """
contents = os.listdir(directory)
contents.sort()
contents = [x for x in contents if not x.startswith('_')]
user_extensions = []
for i in contents:
fullpath = os.path.join(directory, i)
if fullpath.endswith('.py') \
and os.path.isfile(fullpath):
                try:
                    txt = open(fullpath, 'r').read(1000)
                except IOError:
                    print(' IOError while loading extension: %r' % fullpath)
                    continue  # txt is undefined on failure; skip this file
                if '# IDLEX EXTENSION' in txt:
name = i[:-3] # truncate .py
user_extensions.append(name)
else:
print(' Not an IdleX extension: %r' % fullpath)
return user_extensions
def load_extension(self, name):
""" Imports an extension by name and returns a reference to the module.
Invalid modules return None.
"""
fullname = 'extensions.%s' % name
try:
if HAS_IMPORTLIB:
mod = importlib.import_module('.' + fullname, package=__package__)
else:
mod = __import__(fullname, globals(), locals(), [''], 1)
except Exception as err:
import traceback
traceback.print_exc()
mod = None
return mod
    def find_extension(self, name):
        """ Locates an extension and returns imp.find_module's (file, pathname, description) info """
        path = self.extension_dir
        info = imp.find_module(name, [path])
        return info
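The `imp` module was deprecated for years and removed in Python 3.12. A modern equivalent of this lookup (a sketch, not the module's actual code, and searching `sys.path` rather than the extension directory) uses `importlib.util.find_spec`:

```python
import importlib.util

def find_extension_spec(name):
    # Modern replacement for imp.find_module: returns a ModuleSpec
    # describing where the module would load from, or None if not found.
    return importlib.util.find_spec(name)

spec = find_extension_spec('json')
```

`find_spec` also avoids `imp.find_module`'s open-file-handle return value, which callers had to remember to close.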
def load_extension_cfg(self, extName):
""" Load the extension. get its default config string
from the "config_extension_def" variable."""
mod = self.load_extension(extName)
if mod is None:
print("could not load %s" % extName)
return
if hasattr(mod, "config_extension_def"):
return mod.config_extension_def
else:
print("\n Missing 'config_extension_def' in %s. Not loading." % extName)
return None
def copy_options(self, name, cfgdict, confparse, blank=False):
d = cfgdict["extensions"]
optionlist = confparse.GetOptionList(name)
for option in optionlist:
try:
value = confparse.get(name, option, raw=True)
except BaseException as e:
print(' Error during extension settings copy:\n', e)
return False
if not d.has_section(name):
d.add_section(name)
if not blank:
d.set(name, option, value)
else:
d.set(name, option, '')
return True
def transfer_cfg(self, extName, confparse, keys=True):
""" Transfer the configuration from the extension
into IDLE's configuration. Returns True if successful. """
if confparse is None:
return False
# copy the user extension configuration in IDLE
retval = self.copy_options(extName, idleConf.userCfg, confparse)
if 0: # DEVELOPERS - this takes a long time to process
# Report Any keybinding conflicts the user extension may have
keyset = idleConf.GetCurrentKeySet()
name_cfg = extName+'_cfgBindings'
optionlist = confparse.GetOptionList(name_cfg)
for option in optionlist:
b = '<<%s>>' % option
value = confparse.get(name_cfg, option)
if value == '<Control-Key-l>': continue # WORKAROUND: skip clear window binding
for event, binding in list(keyset.items()):
if value in binding and event != b and value:
print('\n Warning: [%s] has an event binding conflict with' % name_cfg)
print(' ', event, value)
# idleConf.GetExtensionBindings pulls only from the default configuration.
# Must transfer bindings to defaultCfg dictionary instead.
if keys:
self.copy_options(extName+'_cfgBindings', idleConf.defaultCfg,
confparse)
return retval
def load_idlex_extensions(self, userExt=None):
""" Load extensions. Returns number of extensions loaded. """
if userExt is None:
userExt = self.IDLEX_EXTENSIONS
# get already-saved settings
d = idleConf.GetUserCfgDir()
usercfgfile = os.path.join(d, 'idlex-config-extensions.cfg')
if os.path.isfile(usercfgfile):
U = open(usercfgfile).read()
else:
U = ''
count = 0
userConfParser = make_config_parser(U)
key_isdefault = idleConf.GetOption('main','Keys','default', type="bool")
for extName in userExt:
if self.reload_cfg(extName):
count += 1
# transfer already-saved settings, otherwise IDLE forgets them
# when idleConf.SaveUserCfgFiles is called from within IDLE. Bug?
self.transfer_cfg(extName, userConfParser,
keys=not key_isdefault) # Overwrite defaults with user config
idleConf.SaveUserCfgFiles()
return count
def reload_cfg(self, extName):
# get the default configuration for the individual extension
cfg = self.load_extension_cfg(extName)
if cfg is None:
return False
# shove the conf string into a ConfigParse object
extConfParser = make_config_parser(cfg)
if extConfParser is None:
print('\n Unable to parse configuration for %s' % extName)
return False
# transfer the configuration to IDLE
if not self.transfer_cfg(extName, extConfParser, keys=True):
print('\n Unable to transfer configuration for %s' % extName)
return False
return True
try:
from . import extensions
except (ImportError, ValueError) as err:
import extensions
path = extensions.__file__
extensionManager = ExtensionManager(path)
|
Come in during our open hours, or you can sign up at this link.
The online form requires you to attach an image of your photo ID with your current RI address or another form of identification plus proof of residency. This form can also be printed and faxed or mailed in.
|
'''
Created on 23 Sep 2015
@author: peterb
'''
from pika import adapters
import pika
import logging
import os
class PikaBroadcaster(object):
"""This is an example consumer that will handle unexpected interactions
with RabbitMQ such as channel and connection closures.
If RabbitMQ closes the connection, it will reopen it. You should
look at the output, as there are limited reasons why the connection may
be closed, which usually are tied to permission related issues or
socket timeouts.
If the channel is closed, it will indicate a problem with one of the
commands that were issued and that should surface in the output as well.
"""
EXCHANGE = 'chat-messages'
    EXCHANGE_TYPE = 'fanout'  # alternatively 'topic'
QUEUE = 'chat'
ROUTING_KEY = ''
def __init__(self, amqp_url=None):
"""Create a new instance of the consumer class, passing in the AMQP
URL used to connect to RabbitMQ.
:param str amqp_url: The AMQP url to connect with
"""
self._clients = None
self._connection = None
self._channel = None
self._closing = False
self._consumer_tag = None
self._url = amqp_url
def set_clients(self, clients):
"""used to call clients"""
self._clients = clients
def connect(self):
"""This method connects to RabbitMQ, returning the connection handle.
When the connection is established, the on_connection_open method
will be invoked by pika.
:rtype: pika.SelectConnection
"""
logging.info('Connecting to %s', self._url)
return adapters.TornadoConnection(pika.URLParameters(self._url),
self.on_connection_open)
def close_connection(self):
"""This method closes the connection to RabbitMQ."""
logging.info('Closing connection')
self._connection.close()
def add_on_connection_close_callback(self):
"""This method adds an on close callback that will be invoked by pika
when RabbitMQ closes the connection to the publisher unexpectedly.
"""
logging.info('Adding connection close callback')
self._connection.add_on_close_callback(self.on_connection_closed)
def on_connection_closed(self, connection, reply_code, reply_text):
"""This method is invoked by pika when the connection to RabbitMQ is
closed unexpectedly. Since it is unexpected, we will reconnect to
RabbitMQ if it disconnects.
:param pika.connection.Connection connection: The closed connection obj
:param int reply_code: The server provided reply_code if given
:param str reply_text: The server provided reply_text if given
"""
self._channel = None
if self._closing:
self._connection.ioloop.stop()
else:
logging.warning('Connection closed, reopening in 5 seconds: (%s) %s',
reply_code, reply_text)
self._connection.add_timeout(5, self.reconnect)
def on_connection_open(self, unused_connection):
"""This method is called by pika once the connection to RabbitMQ has
been established. It passes the handle to the connection object in
case we need it, but in this case, we'll just mark it unused.
:type unused_connection: pika.SelectConnection
"""
logging.info('Connection opened')
self._connection = unused_connection
self.add_on_connection_close_callback()
self.open_channel()
def reconnect(self):
"""Will be invoked by the IOLoop timer if the connection is
closed. See the on_connection_closed method.
"""
if not self._closing:
# Create a new connection
self._connection = self.connect()
def add_on_channel_close_callback(self):
"""This method tells pika to call the on_channel_closed method if
RabbitMQ unexpectedly closes the channel.
"""
logging.info('Adding channel close callback')
self._channel.add_on_close_callback(self.on_channel_closed)
def on_channel_closed(self, channel, reply_code, reply_text):
"""Invoked by pika when RabbitMQ unexpectedly closes the channel.
Channels are usually closed if you attempt to do something that
violates the protocol, such as re-declare an exchange or queue with
different parameters. In this case, we'll close the connection
to shutdown the object.
:param pika.channel.Channel: The closed channel
:param int reply_code: The numeric reason the channel was closed
:param str reply_text: The text reason the channel was closed
"""
logging.warning('Channel %i was closed: (%s) %s',
channel, reply_code, reply_text)
self._connection.close()
def on_channel_open(self, channel):
"""This method is invoked by pika when the channel has been opened.
The channel object is passed in so we can make use of it.
Since the channel is now open, we'll declare the exchange to use.
:param pika.channel.Channel channel: The channel object
"""
logging.info('Channel opened')
self._channel = channel
self.add_on_channel_close_callback()
self.setup_exchange(self.EXCHANGE)
def setup_exchange(self, exchange_name):
"""Setup the exchange on RabbitMQ by invoking the Exchange.Declare RPC
command. When it is complete, the on_exchange_declareok method will
be invoked by pika.
:param str|unicode exchange_name: The name of the exchange to declare
"""
logging.info('Declaring exchange %s', exchange_name)
self._channel.exchange_declare(self.on_exchange_declareok,
exchange_name,
self.EXCHANGE_TYPE)
def on_exchange_declareok(self, unused_frame):
"""Invoked by pika when RabbitMQ has finished the Exchange.Declare RPC
command.
:param pika.Frame.Method unused_frame: Exchange.DeclareOk response frame
"""
logging.info('Exchange declared')
self.setup_queue(self.QUEUE)
def setup_queue(self, queue_name):
"""Setup the queue on RabbitMQ by invoking the Queue.Declare RPC
command. When it is complete, the on_queue_declareok method will
be invoked by pika.
:param str|unicode queue_name: The name of the queue to declare.
"""
logging.info('Declaring queue %s', queue_name)
self._channel.queue_declare(self.on_queue_declareok, exclusive=True)
def on_queue_declareok(self, method_frame):
"""Method invoked by pika when the Queue.Declare RPC call made in
setup_queue has completed. In this method we will bind the queue
and exchange together with the routing key by issuing the Queue.Bind
RPC command. When this command is complete, the on_bindok method will
be invoked by pika.
:param pika.frame.Method method_frame: The Queue.DeclareOk frame
"""
self.QUEUE = method_frame.method.queue
logging.info('Binding %s to %s with %s',
self.EXCHANGE, self.QUEUE, self.ROUTING_KEY)
self._channel.queue_bind(self.on_bindok, self.QUEUE,
self.EXCHANGE, self.ROUTING_KEY)
def add_on_cancel_callback(self):
"""Add a callback that will be invoked if RabbitMQ cancels the consumer
for some reason. If RabbitMQ does cancel the consumer,
on_consumer_cancelled will be invoked by pika.
"""
logging.info('Adding consumer cancellation callback')
self._channel.add_on_cancel_callback(self.on_consumer_cancelled)
def on_consumer_cancelled(self, method_frame):
"""Invoked by pika when RabbitMQ sends a Basic.Cancel for a consumer
receiving messages.
:param pika.frame.Method method_frame: The Basic.Cancel frame
"""
logging.info('Consumer was cancelled remotely, shutting down: %r',
method_frame)
if self._channel:
self._channel.close()
def acknowledge_message(self, delivery_tag):
"""Acknowledge the message delivery from RabbitMQ by sending a
Basic.Ack RPC method for the delivery tag.
:param int delivery_tag: The delivery tag from the Basic.Deliver frame
"""
logging.info('Acknowledging message %s', delivery_tag)
self._channel.basic_ack(delivery_tag)
def on_message(self, unused_channel, basic_deliver, properties, body):
"""Invoked by pika when a message is delivered from RabbitMQ. The
channel is passed for your convenience. The basic_deliver object that
is passed in carries the exchange, routing key, delivery tag and
a redelivered flag for the message. The properties passed in is an
instance of BasicProperties with the message properties and the body
is the message that was sent.
:param pika.channel.Channel unused_channel: The channel object
:param pika.Spec.Basic.Deliver: basic_deliver method
:param pika.Spec.BasicProperties: properties
:param str|unicode body: The message body
"""
logging.info('Received message # %s from %s: %s',
basic_deliver.delivery_tag, properties.app_id, body)
if self._clients:
            for client in self._clients:
client.write_message(body)
self.acknowledge_message(basic_deliver.delivery_tag)
def on_cancelok(self, unused_frame):
"""This method is invoked by pika when RabbitMQ acknowledges the
cancellation of a consumer. At this point we will close the channel.
This will invoke the on_channel_closed method once the channel has been
closed, which will in-turn close the connection.
:param pika.frame.Method unused_frame: The Basic.CancelOk frame
"""
logging.info('RabbitMQ acknowledged the cancellation of the consumer')
self.close_channel()
def stop_consuming(self):
"""Tell RabbitMQ that you would like to stop consuming by sending the
Basic.Cancel RPC command.
"""
if self._channel:
logging.info('Sending a Basic.Cancel RPC command to RabbitMQ')
self._channel.basic_cancel(self.on_cancelok, self._consumer_tag)
def start_consuming(self):
"""This method sets up the consumer by first calling
add_on_cancel_callback so that the object is notified if RabbitMQ
cancels the consumer. It then issues the Basic.Consume RPC command
which returns the consumer tag that is used to uniquely identify the
consumer with RabbitMQ. We keep the value to use it when we want to
cancel consuming. The on_message method is passed in as a callback pika
will invoke when a message is fully received.
"""
logging.info('Issuing consumer related RPC commands')
self.add_on_cancel_callback()
self._consumer_tag = self._channel.basic_consume(self.on_message,
self.QUEUE)
def on_bindok(self, unused_frame):
"""Invoked by pika when the Queue.Bind method has completed. At this
point we will start consuming messages by calling start_consuming
which will invoke the needed RPC commands to start the process.
:param pika.frame.Method unused_frame: The Queue.BindOk response frame
"""
logging.info('Queue bound')
self.start_consuming()
def close_channel(self):
"""Call to close the channel with RabbitMQ cleanly by issuing the
Channel.Close RPC command.
"""
logging.info('Closing the channel')
self._channel.close()
def open_channel(self):
"""Open a new channel with RabbitMQ by issuing the Channel.Open RPC
command. When RabbitMQ responds that the channel is open, the
on_channel_open callback will be invoked by pika.
"""
logging.info('Creating a new channel')
self._connection.channel(on_open_callback=self.on_channel_open)
def post(self, msg):
if self._channel:
            self._channel.basic_publish(self.EXCHANGE, routing_key=self.ROUTING_KEY, body=msg)
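The reconnect decision in `on_connection_closed` — stop the IOLoop when the close was deliberate, otherwise schedule a retry — can be sketched independently of pika (function name and return shape are illustrative):

```python
def decide_on_close(closing, retry_delay=5):
    # Mirrors on_connection_closed: a deliberate close stops the loop;
    # an unexpected close schedules a reconnect after retry_delay seconds.
    if closing:
        return ('stop', None)
    return ('reconnect', retry_delay)

action = decide_on_close(False)
```

Keeping this decision in one place means every unexpected disconnect, whatever its cause, goes through the same fixed-delay retry path.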
|
DE PERE - St. Norbert College, ranked No. 14 in the D3hoops.com Top 25, makes its tenth NCAA Division III Tournament appearance when it faces No. 3 UW-Stevens Point at 7 p.m. Friday at Berg Gym.
Simpson College (21-6) and Concordia-Moorhead College (21-6) play at 5 p.m. Friday. The two winners advance to the regional championship at 7 p.m. Saturday.
|
#!/usr/local/bin/python3
# Libraries are in parent directory
import sys
sys.path.append('../')
import numpy as np
import scipy
import time, csv, math, collections
from dtrw import *
# Local fit functions for a variety of scripts
from fit_functions import *
import mpmath
import scipy.integrate
import scipy.optimize
import scipy.stats
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
from matplotlib import animation
from matplotlib import cm
from matplotlib.backends.backend_pdf import PdfPages
import pdb
output_pdf = sys.argv[1]
def append_string_as_int(array, item):
try:
array = np.append(array, np.int32(item))
except ValueError:
array = np.append(array, np.nan)
return array
def append_string_as_float(array, item):
try:
array = np.append(array, np.float64(item))
except ValueError:
array = np.append(array, np.nan)
return array
labels = []
image_index = []
cervix = []
EDTA = []
p24 = np.array([], dtype=np.int32)
virions = np.array([], dtype=np.int32)
penetrators = np.array([], dtype=np.int32)
depth = np.array([], dtype=np.float64)
no_mucous_data = 'SMEG_Data/NeuraminidaseNOBAFLinear.csv'
with_mucous_data = 'SMEG_Data/PenetrationMLoadnewestOMITAngelafixed.csv'
EDTA_data = 'SMEG_Data/EctoCervixEDTABaLAngelafixed.csv'
with open(EDTA_data, 'r', newline='') as csvfile:
reader = csv.reader(csvfile, delimiter=',')
labels = next(reader)
for row in reader:
image_index.append(row[0])
cervix.append(row[1])
EDTA.append(row[2])
p24 = append_string_as_int(p24, row[3])
virions = append_string_as_int(virions, row[4])
penetrators = append_string_as_int(penetrators, row[5])
depth = append_string_as_float(depth, row[6])
count = collections.Counter(image_index)
image_yes = collections.Counter([image_index[i] for i in range(len(image_index)) if EDTA[i] == 'Y'])
image_no = collections.Counter([image_index[i] for i in range(len(image_index)) if EDTA[i] == 'N'])
# The number of sites we analyse...
number_of_sites = 10
# Pick out the most common sites
sites = count.most_common(number_of_sites)
sites_yes = image_yes.most_common(number_of_sites)
sites_no = image_no.most_common(number_of_sites)
pp = PdfPages(output_pdf + '{0}.pdf'.format(sys.argv[2]))
#for site in [sites_yes, sites_no]:
# for site in sites:
if sys.argv[3] == 'Y':
site = sites_yes[int(sys.argv[2])]
else:
site = sites_no[int(sys.argv[2])]
# All locations of this particular image
img_loc = [i for i,x in enumerate(image_index) if x == site[0]]
depth_loc = depth[img_loc]
# Ok, data loaded, now let's get to business. Prune zeros and NaNs from depth
# (may want to in fact double check a 0.0 depth is valid if it's seen as a penetrator)
nz_depth = depth_loc[np.nonzero(depth_loc)]
nz_depth = nz_depth[~np.isnan(nz_depth)]
num_depth_bins = 20
# Depth Histogram
depth_hist, depth_bins = np.histogram(nz_depth, num_depth_bins, density=True)
bin_cent = (depth_bins[1:]+depth_bins[:-1])/2.0
# Depth based survival function - sometimes a better function to fit to, and we certainly don't lose resolution
# scipy.stats.itemfreq was deprecated and later removed; np.unique gives the same value/count pairs
surv_func_x, surv_func_counts = np.unique(nz_depth - 1.0, return_counts=True)
surv_func_y = 1.0 - np.insert(np.cumsum(surv_func_counts), 0, 0.0)[:-1] / surv_func_counts.sum()
if surv_func_x[0] != 0.0:
surv_func_x = np.insert(surv_func_x, 0, 0.0)
surv_func_y = np.insert(surv_func_y, 0, 1.0)
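The depth-based survival function built above can be checked on a toy sample. This sketch (independent of the script's data files) builds S(x) = P(depth > x) from unique values and counts, the same cumulative-count construction used in the script:

```python
import numpy as np

depths = np.array([1.0, 1.0, 2.0, 3.0, 3.0, 3.0])
xs, counts = np.unique(depths, return_counts=True)
# S(x_i) = 1 - (fraction of observations at values before x_i)
surv = 1.0 - np.insert(np.cumsum(counts), 0, 0.0)[:-1] / counts.sum()
# xs   -> [1. 2. 3.]
# surv -> [1.0, 0.666..., 0.5]
```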
T = 4.0
L = surv_func_x.max() #nz_depth.max()
dX = L / 100.0
D_alpha = 20.0
alpha = 0.75
# Last minimisation got close to:
#diff_fit = [ 5.28210775, 0.95847065]
#subdiff_fit = [ 15.07811124, 0.55, 0.99997347]
xs = np.arange(0.0, L+dX, dX)
#
# FIT Diffusion model - analytic
#
diff_init_params = [D_alpha]
diff_fit = scipy.optimize.fmin_slsqp(lsq_diff, diff_init_params, args=(T, surv_func_x, surv_func_y), \
bounds=[(0.0, np.Inf)], epsilon = 1.0e-8, acc=1.0e-6, full_output=True)
diff_sq_err = diff_fit[1]
diff_fit = diff_fit[0]
print('Diffusion fit parameters:', diff_fit)
diff_analytic_soln_survival = produce_diff_soln_survival(diff_fit, T, xs)
diff_analytic_soln = produce_diff_soln(diff_fit, T, xs)
#
# FIT Subdiffusion model - numerical (DTRW algorithm)
#
#history_truncation = 0
# New regime: start at diff parameter fit
#subdiff_init_params = [diff_fit[0], alpha]
#subdiff_fit = scipy.optimize.fmin_slsqp(lsq_subdiff, subdiff_init_params, args=(T, 4.0 * L, dX, surv_func_x, surv_func_y, history_truncation), \
# bounds=[(0.0, 50.0),(0.51, 1.0)], epsilon = 1.0e-3, acc=1.0e-6, full_output=True)
#subdiff_sq_err = subdiff_fit[1]
#subdiff_fit = subdiff_fit[0]
#print 'Subdiffusion fit parameters:', subdiff_fit
#dtrw_sub_soln = produce_subdiff_soln(subdiff_fit, T, 4.0*L, dX)
#dtrw_sub_soln_survival = produce_subdiff_soln_survival(subdiff_fit, T, 4.0*L, dX)
#
# FIT Subdiffusion model - analytic
#
subdiff_anal_init_params = [D_alpha]
subdiff_anal_fit = scipy.optimize.fmin_slsqp(lsq_subdiff_analytic, subdiff_anal_init_params, args=(T, surv_func_x, surv_func_y), \
bounds=[(0.0, np.Inf)], epsilon = 1.0e-3, acc=1.0e-6, full_output=True)
subdiff_anal_sq_err = subdiff_anal_fit[1]
subdiff_anal_fit = subdiff_anal_fit[0]
print('Subdiffusion analytic fit parameters:', subdiff_anal_fit)
anal_sub_soln = produce_subdiff_analytic_soln(subdiff_anal_fit, T, xs)
anal_sub_soln_survival = produce_subdiff_analytic_survival(subdiff_anal_fit, T, xs)
#
# FIT Exponential... for fun
#
slope, offset = np.linalg.lstsq(np.vstack([surv_func_x, np.ones(len(surv_func_x))]).T, np.log(surv_func_y).T, rcond=None)[0]
exp_fit = np.exp(offset + xs * slope)
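The exponential fit above linearizes S(x) ≈ exp(offset + slope·x) by regressing log S on x with least squares. A self-contained check on synthetic (noiseless, assumed) data recovers the parameters exactly:

```python
import numpy as np

xs = np.linspace(0.0, 5.0, 20)
true_slope, true_offset = -0.8, 0.2
y = np.exp(true_offset + true_slope * xs)  # exact exponential, no noise

# Design matrix [x, 1]; solve for [slope, offset] against log(y)
A = np.vstack([xs, np.ones(len(xs))]).T
slope, offset = np.linalg.lstsq(A, np.log(y), rcond=None)[0]
# slope ≈ -0.8, offset ≈ 0.2
```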
#
# PLOT IT ALL
#
fig = plt.figure(figsize=(16,8))
ax1 = fig.add_subplot(1, 2, 1)
bar1, = ax1.plot(surv_func_x, surv_func_y, 'b.-')
#line1, = ax1.plot(xs, dtrw_sub_soln_survival.T[:xs.size], 'r.-')
line2, = ax1.plot(xs, anal_sub_soln_survival, 'y.-')
line3, = ax1.plot(xs, diff_analytic_soln_survival, 'g.-')
line4, = ax1.plot(xs, exp_fit, 'b')
ax1.set_title('Survival function vs fits, ' + site[0] + ', {0} virions'.format(site[1]))
ax2 = fig.add_subplot(1, 2, 2)
ax2.semilogy(surv_func_x, surv_func_y, 'b.-')
#ax2.semilogy(xs, dtrw_sub_soln_survival.T[:xs.size], 'r.-')
ax2.semilogy(xs, anal_sub_soln_survival, 'y.-')
ax2.semilogy(xs, diff_analytic_soln_survival, 'g.-')
ax2.semilogy(xs, exp_fit, 'b')
ax2.set_title('Logarithm of survival function vs fits, ' + site[0] + ', {0} virions'.format(site[1]))
#plt.legend([bar1, line1, line2, line3, line4], ["Viral survival func", "Subdiffusion fit, alpha={0:.2f}, D_alpha={1:.2f}, sq_err={2:.4f}".format(subdiff_fit[1],subdiff_fit[0],subdiff_sq_err), \
plt.legend([bar1, line2, line3, line4], ["Viral survival func", \
"Analytic subdiff fit, alpha=1/2, D_alpha={0:.2f}, sq_err={1:.4f}".format(subdiff_anal_fit[0], subdiff_anal_sq_err), \
"Diffusion fit, D_alpha={0:.2f}, sq_err={1:.2f}".format(diff_fit[0], diff_sq_err), "Exponential fit"], loc=3)
pp.savefig()
pp.close()
#plt.show()
|
About once a week over the past year I have profiled jazz albums from different artists to provide an admittedly personal and idiosyncratic Top 50 (in no particular order). My choices were driven in large part by my preference for relatively straight-ahead jazz from the swing and bebop eras, and especially the music from the mid-1950s/early 1960s that evolved from bebop and became known as hard bop. There are obviously many glaring omissions, but hopefully some pleasant surprises too.
Below are links to the complete list. Take a look and let me know the artists and albums you think I've missed.
Fantastic list – and I too could live without pretty much anything done since Trane passed on.
Don't think I'd include Elmo or Harold Land though I relish their recordings with Clifford and may have to admit Elmo for his brilliant compositions. I like Blue Mitchell alright but would not include him (or Booker Little) among the 50 greats -- not in a league with your other trumpeters or the conspicuously absent Lee Morgan, Dizzy and Roy Eldridge. Would even subordinate him to Donald Byrd and Freddy Hubbard (from their Messenger days) and Fats Navarro.
Wes Montgomery (deservedly always # 1 in polls of professional guitarists and their fans -- and you have NO guitarists); Lester Young (why Hawk but not Pres?); Milt Jackson? Jimmy Smith? Mingus? Django? Sarah is my favorite lady singer ever, but where’s Lady Day -- especially her early stuff?
As a jazz guitar player and devotee, I’d have to include Jim Hall (Sonny Rollins and Art Farmer did) and Kenny Burrell. Piano-wise, I’m partial to Tommy Flanagan and Hank Jones – maybe at the expense of John Hicks or the marvelous Wynton Kelly. No list is complete without a bassist or two – probably Ron Carter first, but Paul Chambers, Ray Brown and Percy Heath certainly get honorable mention. Does the list get more than one drummer? Hard to omit Max Roach and I have a thing for Kenny Clarke.
Great stuff, Andy. I’ve always wanted to do this.
This guy’s list is good! One thing I look for in a list like this is who is left off. That tells me a lot about how deep & well-studied the person is. I looked at a list the other day & both Johnny Griffin & Harold Land were not on it. And when I looked at this list those were the first I looked for. Now I know how opinionated you can be when it comes to your standards in players, but I think Harold Land was one of the most meticulous & gutsy improvisers to come out of the 50s if you listen closely to his line. And like all the other greats his sound is one that is drenched in the blues but it does not override his sound. His sound did change in the 60s. In some ways it kinda became a little more West Coast. But in the 50s his sound & line were textbook. He was the most East Coast sounding player ever to come from the West Coast in the 50s. I really think you should reinvestigate this guy because he was one of the true masters of the 50s & even still in the very early 60s (1961). I did not truly come to appreciate Mr. Land’s sound & approach until about 20 years ago.
As far as The Little Giant Johnny Griffin by my standard his playing between 1956 & 57 was 2nd to no one, including Sonny & Trane. But by 1958 Trane had surpassed everyone, even if his sound was not liked by many.
A lot of people forget about Benny Golson but by the early-mid 60s was sounding like a Trane from the mid 50s but in his own way, which he still sounds like to this day ! And it is a beautiful sound. Another true master of the horn.
Thanks, Jimmy and Gil, for your comments and for taking the time to look this over. You should know that I didn't make a list of 50 at the outset; my various choices sprung more haphazardly from one to the next. As a result, there are definitely some big omissions, many of whom Jimmy points out.
|
#!/usr/bin/python3.5
#-*- coding: utf-8 -*-
# Task automation for Unix systems
# Non-Commercial Purposes License
# DO NOT REMOVE THE CREDITS!
# Open source script | Author: d3str0 | Telegram: @phoenix_burning
# See more of my scripts on GitHub: https://github.com/d3str0h4x
import time
import os
banner = '''
███████╗██╗ ██╗███████╗██╗ ██╗ ██████╗ ██╗ ██╗███╗ ██╗███████╗██████╗
██╔════╝██║ ██║██╔════╝██║ ██║ ██╔═████╗██║ ██║████╗ ██║██╔════╝██╔══██╗
███████╗███████║█████╗ ██║ ██║ ██║██╔██║██║ █╗ ██║██╔██╗ ██║█████╗ ██║ ██║
╚════██║██╔══██║██╔══╝ ██║ ██║ ████╔╝██║██║███╗██║██║╚██╗██║██╔══╝ ██║ ██║
███████║██║ ██║███████╗███████╗███████╗╚██████╔╝╚███╔███╔╝██║ ╚████║███████╗██████╔╝
╚══════╝╚═╝ ╚═╝╚══════╝╚══════╝╚══════╝ ╚═════╝ ╚══╝╚══╝ ╚═╝ ╚═══╝╚══════╝╚═════╝
'''
skull = ''' .xx"""" """$$$$be.
-" ^""**$$$e.
." ENJOY!! '$$$c
/ Coded by d3str0 "4$$b
d 3 $$$$
$ * .$$$$$$
.$ ^c $$$$$e$$$$$$$$.
d$L 4. 4$$$$$$$$$$$$$$b
$$$$b ^ceeeee. 4$$ECL.F*$$$$$$$
e$""=. $$$$P d$$$$F $ $$$$$$$$$- $$$$$$
z$$b. ^c 3$$$F "$$$$b $"$$$$$$$ $$$$*" .=""$c
4$$$$L $$P" "$$b .$ $$$$$...e$$ .= e$$$.
^*$$$$$c %.. *c .. $$ 3$$$$$$$$$$eF zP d$$$$$
"**$$$ec " %ce"" $$$ $$$$$$$$$$* .r" =$$$$P""
"*$b. "c *$e. *** d$$$$$"L$$ .d" e$$***"
^*$$c ^$c $$$ 4J$$$$$% $$$ .e*".eeP"
"$$$$$$"'$=e....$*$$**$cz$$" "..d$*"
"*$$$ *=%4.$ L L$ P3$$$F $$$P"
"$ "%*ebJLzb$e$$$$$b $P"
%.. 4$$$$$$$$$$ "
$$$e z$$$$$$$$$$%
"*$c "$$$$$$$P"
."""*$$$$$$$$bc
.-" .$***$$$"""*e.
.-" .e$" "*$c ^*b.
.=*"""" .e$*" "*bc "*$e..
.$" .z*" ^*$e. "*****e.
$$ee$c .d" "*$. 3.
^*$E")$..$" * .ee==d%
$.d$$$* * J$$$e*
""""" "$$$"
'''
green = '\033[1;32m'
blue = '\033[34m'
purple = '\033[35m'
red = '\033[31m'
options = '''
1 Metasploit
2 Neofetch
3 Edit repositories
4 Update system
1337 INFO
3301 HELP
99 exit'''
sep = "-"
info = '''
Project: Shellowned
Author: d3str0
Telegram: @phoenix_burning
See more of my scripts on GitHub: https://github.com/d3str0h4x'''
description = "Unix command automation\n"
help = (green+"type '3301' for tutorial")
time.sleep(2)
def init():
os.system('clear')
print(green+banner)
time.sleep(3)
print(red+skull)
time.sleep(2)
os.system('clear')
print(purple+description)
print(red+'---------------------------------------')
print(blue+help)
print(red+'---------------------------------------')
while True:
print(blue+options)
y = input(purple+'Choose an option: ')
if y == '1':
os.system('clear')
print(green+'Launching Metasploit Framework..')
os.system('sudo msfconsole')
elif y == '2':
os.system('clear')
os.system('neofetch')
elif y == '3':
print(green+'Opening repository list..')
time.sleep(2)
os.system('sudo nano /etc/apt/sources.list')
elif y == '4':
print(green+'Updating your system..')
os.system('sudo apt-get update -y && sudo apt-get upgrade -y')
elif y == '1337':
print(green+sep+info)
elif y == '3301':
os.system('clear')
print('Choose an option by entering the corresponding number')
print(green+"To exit, type '99'")
elif y == '99':
break
else:
print(red+'invalid option')
init()
|
16/17 SEASON CARDS | Over 3,000 fans have signed up, have you?
There is still time for you to get your Season Card for the new season!
Season Cards can be bought online, at the ticket office at Roots Hall or by calling the Blues Box Office on 08444770077.
Southend United supporters have backed the Blues in numbers ahead of the new season, with over 3,000 2016/17 Season Cards sold.
The Club have FROZEN prices for 2016/17 Season Cards and many Blues fans have been quick to secure their seats and sign up to the most cost effective way to follow their team.
15/16 Season Card Holders had until Thursday 30 June to renew their current seats for the new season before they were released for general sale. Those who have not renewed their seat yet, can still do so subject to availability.
With just over a month until the new season kicks off, Blues are already at 87% of their entire Season Card Sales for 15/16.
After a strong first season back in League One the Club’s objective next season will be Championship football and we would like to reward fans for their continued loyalty and support.
|
# Copyright 2011 James McCauley
#
# This file is part of POX.
#
# POX is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# POX is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with POX. If not, see <http://www.gnu.org/licenses/>.
"""
Various utility functions
"""
import traceback
import struct
import sys
import os
import time
import socket
#FIXME: ugh, why can't I make importing pox.core work here?
import logging
log = logging.getLogger("util")
class DirtyList (list):
#TODO: right now the callback may be called more often than needed
# and it may not be called with good names/parameters.
# All you can really rely on is that it will be called in
# some way if something may have changed.
def __init__ (self, *args, **kw):
list.__init__(self, *args, **kw)
self.dirty = False
self.callback = None
def __setslice__ (self, k, v):
#TODO: actually check for change
self._smudge('__setslice__', k, v)
list.__setslice__(self, k, v)
def __delslice__ (self, k):
#TODO: actually check for change
self._smudge('__delslice__', k, None)
list.__delslice__(self, k)
def append (self, v):
self._smudge('append', None, v)
list.append(self, v)
def extend (self, v):
self._smudge('extend', None, v)
list.extend(self, v)
def insert (self, i, v):
self._smudge('insert', i, v)
list.insert(self, i, v)
def pop (self, i=-1):
self._smudge('pop', i, None)
list.pop(self, i)
def remove (self, v):
if v in self:
self._smudge('remove', None, v)
list.remove(self, v)
def reverse (self):
if len(self):
self._smudge('reverse', None, None)
list.reverse(self)
def sort (self, *arg, **kw):
#TODO: check for changes?
self._smudge('sort', None, None)
list.sort(self, *arg, **kw)
def __setitem__ (self, k, v):
if isinstance(k, slice):
#TODO: actually check for change
self._smudge('__setitem__slice',k,v)
elif self[k] != v:
self._smudge('__setitem__',k,v)
list.__setitem__(self, k, v)
def __delitem__ (self, k):
list.__delitem__(self, k)
if isinstance(k, slice):
#TODO: actually check for change
self._smudge('__delitem__slice', k, None)
else:
self._smudge('__delitem__', k, None)
def _smudge (self, reason, k, v):
if self.callback:
if self.callback(reason, k, v) is not True:
self.dirty = True
else:
self.dirty = True
class DirtyDict (dict):
"""
A dict that tracks whether values have been changed shallowly.
If you set a callback, it will be called when the value changes, and
passed three values: "add"/"modify"/"delete", key, value
"""
def __init__ (self, *args, **kw):
dict.__init__(self, *args, **kw)
self.dirty = False
self.callback = None
def _smudge (self, reason, k, v):
if self.callback:
if self.callback(reason, k, v) is not True:
self.dirty = True
else:
self.dirty = True
def __setitem__ (self, k, v):
if k not in self:
self._smudge('__setitem__add',k,v)
elif self[k] != v:
self._smudge('__setitem__modify',k,v)
dict.__setitem__(self, k, v)
def __delitem__ (self, k):
self._smudge('__delitem__', k, None)
dict.__delitem__(self, k)
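The DirtyDict docstring above describes the callback contract: it receives (reason, key, value), and returning True from it suppresses the dirty flag. A minimal self-contained replica demonstrating that behavior (trimmed to just the pieces needed for the demo):

```python
class DirtyDict(dict):
    # Shallow change tracking: notifies self.callback(reason, key, value)
    def __init__(self, *args, **kw):
        dict.__init__(self, *args, **kw)
        self.dirty = False
        self.callback = None

    def _smudge(self, reason, k, v):
        if self.callback:
            if self.callback(reason, k, v) is not True:
                self.dirty = True
        else:
            self.dirty = True

    def __setitem__(self, k, v):
        if k not in self:
            self._smudge('__setitem__add', k, v)
        elif self[k] != v:
            self._smudge('__setitem__modify', k, v)
        dict.__setitem__(self, k, v)

    def __delitem__(self, k):
        self._smudge('__delitem__', k, None)
        dict.__delitem__(self, k)

events = []
d = DirtyDict()
# Callback records each change; returning False leaves the dirty flag logic intact
d.callback = lambda reason, k, v: events.append((reason, k, v)) or False
d['a'] = 1   # add
d['a'] = 2   # modify
del d['a']   # delete
# events -> [('__setitem__add', 'a', 1), ('__setitem__modify', 'a', 2), ('__delitem__', 'a', None)]
# d.dirty -> True, since the callback never returned True
```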
def set_extend (l, index, item, emptyValue = None):
"""
Adds item to the list l at position index. If index is beyond the end
of the list, it will pad the list out until it's large enough, using
emptyValue for the new entries.
"""
if index >= len(l):
l += ([emptyValue] * (index - len(l) + 1))
l[index] = item
def strToDPID (s):
"""
Convert a DPID in the canonical string form into a long int.
"""
s = s.replace("-", "").split("|", 2)
a = int(s[0], 16)
b = 0
if len(s) == 2:
b = int(s[1])
return a | (b << 48)
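strToDPID above packs the optional '|'-suffixed part of the canonical DPID string into the top 16 bits of the integer. Since this file is Python 2, here is a Python 3 re-expression of the string/int conversion pair (hypothetical snake_case names) for a round-trip sanity check:

```python
import struct

def str_to_dpid(s):
    # "xx-xx-xx-xx-xx-xx|nnn" -> 64-bit int; the |nnn suffix occupies the top 16 bits
    parts = s.replace("-", "").split("|", 2)
    a = int(parts[0], 16)
    b = int(parts[1]) if len(parts) == 2 else 0
    return a | (b << 48)

def dpid_to_str(dpid, always_long=False):
    packed = struct.pack('!Q', dpid)
    r = '-'.join('%02x' % (x,) for x in packed[2:])
    if always_long or packed[0:2] != b'\x00\x00':
        r += '|' + str(struct.unpack('!H', packed[0:2])[0])
    return r

s = "00-00-00-00-00-2a|5"
assert dpid_to_str(str_to_dpid(s)) == s  # round trip preserved
```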
def dpidToStr (dpid, alwaysLong = False):
"""
Convert a DPID from a long into into the canonical string form.
"""
""" In flux. """
if type(dpid) is long or type(dpid) is int:
# Not sure if this is right
dpid = struct.pack('!Q', dpid)
assert len(dpid) == 8
r = '-'.join(['%02x' % (ord(x),) for x in dpid[2:]])
if alwaysLong or dpid[0:2] != (b'\x00'*2):
r += '|' + str(struct.unpack('!H', dpid[0:2])[0])
return r
def assert_type(name, obj, types, none_ok=True):
"""
Assert that a parameter is of a given type.
Raise an Assertion Error with a descriptive error msg if not.
name: name of the parameter for error messages
obj: parameter value to be checked
types: type or list or tuple of types that is acceptable
none_ok: whether 'None' is an ok value
"""
if obj is None:
if none_ok:
return True
else:
raise AssertionError("%s may not be None" % name)
if not isinstance(types, (tuple, list)):
types = [ types ]
for cls in types:
if isinstance(obj, cls):
return True
allowed_types = "|".join(map(lambda x: str(x), types))
stack = traceback.extract_stack()
stack_msg = "Function call %s() in %s:%d" % (stack[-2][2], stack[-3][0], stack[-3][1])
type_msg = "%s must be instance of %s (but is %s)" % (name, allowed_types , str(type(obj)))
raise AssertionError(stack_msg + ": " + type_msg)
def initHelper (obj, kw):
"""
Inside a class's __init__, this will copy keyword arguments to fields
of the same name. See libopenflow for an example.
"""
for k,v in kw.iteritems():
if not hasattr(obj, k):
raise TypeError(obj.__class__.__name__ + " constructor got "
+ "unexpected keyword argument '" + k + "'")
setattr(obj, k, v)
def makePinger ():
"""
A pinger is basically a thing to let you wake a select().
On Unix systems, this makes a pipe pair. But on Windows, select() only
works with sockets, so it makes a pair of connected sockets.
"""
class PipePinger (object):
def __init__ (self, pair):
self._w = pair[1]
self._r = pair[0]
assert os is not None
def ping (self):
if os is None: return #TODO: Is there a better fix for this?
os.write(self._w, ' ')
def fileno (self):
return self._r
def pongAll (self):
#TODO: make this actually read all
os.read(self._r, 1024)
def pong (self):
os.read(self._r, 1)
def __del__ (self):
try:
os.close(self._w)
except:
pass
try:
os.close(self._r)
except:
pass
class SocketPinger (object):
def __init__ (self, pair):
self._w = pair[1]
self._r = pair[0]
def ping (self):
self._w.send(' ')
def pong (self):
self._r.recv(1)
def pongAll (self):
#TODO: make this actually read all
self._r.recv(1024)
def fileno (self):
return self._r.fileno()
#return PipePinger((os.pipe()[0],os.pipe()[1])) # To test failure case
if os.name == "posix":
return PipePinger(os.pipe())
#TODO: clean up sockets?
localaddress = '127.127.127.127'
startPort = 10000
import socket
import select
def tryConnect ():
l = socket.socket()
l.setblocking(0)
port = startPort
while True:
try:
l.bind( (localaddress, port) )
break
except:
port += 1
if port - startPort > 1000:
raise RuntimeError("Could not find a free socket")
l.listen(0)
r = socket.socket()
try:
r.connect((localaddress, port))
except:
import traceback
ei = sys.exc_info()
ei = traceback.format_exception_only(ei[0], ei[1])
ei = ''.join(ei).strip()
log.warning("makePinger: connect exception:\n" + ei)
return False
rlist, wlist,elist = select.select([l], [], [l], 2)
if len(elist):
log.warning("makePinger: socket error in select()")
return False
if len(rlist) == 0:
log.warning("makePinger: socket didn't connect")
return False
try:
w, addr = l.accept()
except:
return False
#w.setblocking(0)
if addr != r.getsockname():
log.info("makePinger: pair didn't connect to each other!")
return False
r.setblocking(1)
# Turn off Nagle
r.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
w.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
return (r, w)
# Try a few times
for i in range(0, 3):
result = tryConnect()
if result is not False:
return SocketPinger(result)
raise RuntimeError("Could not allocate a local socket pair")
def str_to_bool (s):
"""
Given a string, parses out whether it is meant to be True or not
"""
s = str(s).lower() # Make sure
if s in ['true', 't', 'yes', 'y', 'on', 'enable', 'enabled', 'ok',
'okay', '1', 'allow', 'allowed']:
return True
try:
r = 10
if s.startswith("0x"):
s = s[2:]
r = 16
i = int(s, r)
if i != 0:
return True
except:
pass
return False
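str_to_bool above accepts a fixed set of truthy words plus any nonzero decimal or 0x-prefixed hex string. A condensed Python 3 sketch with the same observable behavior:

```python
def str_to_bool(s):
    s = str(s).lower()
    if s in ('true', 't', 'yes', 'y', 'on', 'enable', 'enabled', 'ok',
             'okay', '1', 'allow', 'allowed'):
        return True
    try:
        # Interpret "0x..." as hex, anything else as decimal; nonzero means True
        base = 16 if s.startswith('0x') else 10
        return int(s[2:] if base == 16 else s, base) != 0
    except ValueError:
        return False

# str_to_bool('Yes') -> True; str_to_bool('0x10') -> True; str_to_bool('0') -> False
```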
def hexdump (data):
if isinstance(data, str):
data = [ord(c) for c in data]
o = ""
def chunks (data, length):
return (data[i:i+length] for i in xrange(0, len(data), length))
def filt (c):
if c >= 32 and c <= 126: return chr(c)
return '.'
for i,chunk in enumerate(chunks(data,16)):
if i > 0: o += "\n"
o += "%04x: " % (i * 16,)
l = ' '.join("%02x" % (c,) for c in chunk)
l = "%-48s" % (l,)
l = l[:3*8-1] + " " + l[3*8:]
t = ''.join([filt(x) for x in chunk])
l += ' %-16s' % (t,)
o += l
return o
def connect_socket_with_backoff(address, port, max_backoff_seconds=32):
'''
Connect to the given address and port. If the connection attempt fails,
exponentially back off, up to the max backoff
return the connected socket, or raise an exception if the connection was unsuccessful
'''
backoff_seconds = 1
sock = None
print >>sys.stderr, "connect_socket_with_backoff(address=%s, port=%d)" % (address, port)
while True:
try:
sock = socket.socket()
sock.connect( (address, port) )
break
except socket.error as e:
print >>sys.stderr, "%s. Backing off %d seconds ..." % (str(e), backoff_seconds)
if backoff_seconds >= max_backoff_seconds:
raise RuntimeError("Could not connect to controller %s:%d" % (address, port))
else:
time.sleep(backoff_seconds)
backoff_seconds <<= 1
return sock
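The retry loop above sleeps and doubles the wait after each failed connect until the wait reaches max_backoff_seconds, at which point it raises. The schedule alone can be isolated and tested without touching sockets (hypothetical helper name):

```python
def backoff_schedule(max_backoff_seconds=32):
    # Yield the successive sleep durations the connect loop would use;
    # the real loop raises once the delay reaches the cap.
    delay = 1
    while delay < max_backoff_seconds:
        yield delay
        delay <<= 1

print(list(backoff_schedule(32)))  # [1, 2, 4, 8, 16]
```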
if __name__ == "__main__":
def cb (t,k,v): print v
l = DirtyList([10,20,30,40,50])
l.callback = cb
l.append(3)
print l
|
If you wish to go easy with your GST filing, you need to follow the rulebook!
With the GST system running successfully in the awe-striking continent of Australia, businesses are no longer troubled with filing separate returns for excise, VAT, sales tax and service tax. This has not only simplified their compliance burden but has also helped their businesses flourish.
If you are an entrepreneur running your own business, or are in some other way directly involved in running a business entity, it is very important to understand the GST return filing system and to hire a professional agency offering GST return services for a smoother process.
Here are a few key things to know and consider while filing GST returns in this continent country!
Before that, one needs to first understand what the GST return contains!
All the GST returns can be filed via different mediums; however, the online medium under the guidance of a professional service provider seems to be the best choice.
The GST return mostly comprises the taxable amount that applies to sales of goods and services. However, a few exemptions fall outside the GST's taxable scope.
Exports of the goods and services from Australia are generally GST-free.
Broadly, the supply of a service remains GST-free if the recipient of the service remains outside the Australian province.
However, there are particular set of rules and protocols to follow for determining the goods or services under the GST- free account.
GST has greatly eased the previous tax burden, compliance requirements and return filing system. With a number of agencies offering tax filing and accounting services in Australia, things are expected to become even easier and more convenient.
|
import csv
import glob
import re
import os
import sys
import cPickle as pickle
from featureExtractors.AbsoluteCellCountOriginal import AbsoluteCellCountOriginal
from featureExtractors.RelativeCellCountOriginal import RelativeCellCountOriginal
from featureExtractors.AbsoluteCellCountAlt import AbsoluteCellCountAlt
from featureExtractors.RelativeCellCountAlt import RelativeCellCountAlt
from featureExtractors.BasicInfo import BasicInfo
from featureExtractors.DistanceAlt import DistanceAlt
from featureExtractors.DistanceOriginal import DistanceOriginal
from featureExtractors.MutProbability import MutProbability
from featureExtractors.Lifetime import Lifetime
from featureExtractors.SizeOnAxis import SizeOnAxis
from featureExtractors.RelHeight import RelHeight
from featureExtractors.MuscleLocation import MuscleLocation
from featureExtractors.Symmetry import Symmetry
from featureExtractors.Arc import Arc
from featureExtractors.Monotony import Monotony
from featureExtractors.Gait import Gait
from featureExtractors.ShapeComplexity import ShapeComplexity
from featureExtractors.CompositionComplexity import CompositionComplexity
from helpers.config import PathConfig
__author__ = 'meta'
docString = """ DataCollector 2 main script (rewrite of the original)
This script can be run standalone with 2 optional command line parameters:
[output file name] - (string, default: 'data.csv'), this defines the filename of the CSV output that this script generates
[search pattern] - (string, default: '../EC14-Exp-*'), this defines what folders are searched. Can also be set to "null" to use the default
[limit] - (integer, default: no limit) max number of individuals to get for each experiment
[continue] - (string, default: false) if this is "continue" or "true", then the data collection will not repeat completed experiments
"""
class DataCollector2:
def __init__(self, pattern, outputFile, limit, cont):
if not pattern:
self.pattern = '../EC14-Exp-*'
else:
self.pattern = pattern
if not outputFile:
self.outputFile = 'data.csv'
else:
self.outputFile = outputFile
if not limit:
self.limit = 99999
else:
self.limit = int(limit)
if not cont:
self.cont = False
else:
self.cont = True
print "Using the following parmeters:\n" \
"pattern: {pattern}\n" \
"output file: {outfile}\n" \
"limit: {limit}\n" \
"continue: {cont}".format(
pattern=self.pattern,
outfile=self.outputFile,
limit=self.limit,
cont=self.cont
)
self.experimentsDone = []
self.rowCount = 0
self.headers = []
self.headersWritten = False
self.writer = False
self.outputFileHandle = False
self.previousPercentDone = 0
self.expNumberRegex = re.compile('([0-9]+)$')
self.featureExtractors = [
BasicInfo(),
MutProbability(),
Lifetime(),
DistanceOriginal(),
DistanceAlt(),
AbsoluteCellCountOriginal(),
RelativeCellCountOriginal(),
AbsoluteCellCountAlt(),
RelativeCellCountAlt(),
SizeOnAxis(),
RelHeight(),
MuscleLocation(),
Symmetry(),
Arc(),
Monotony(),
Gait(),
ShapeComplexity(),
CompositionComplexity()
]
self.pickleLocation = os.path.dirname(
os.path.realpath(__file__)) + os.path.sep + ".datacollector2-progress.pickle"
def getExperiments(self):
expFolders = glob.glob(self.pattern)
output = [(self.getExpNumber(os.path.basename(expFolder)),
os.path.basename(expFolder),
expFolder) for expFolder in expFolders if os.path.isdir(expFolder)]
return output
def getExpNumber(self, haystack):
m = self.expNumberRegex.search(haystack)
if m is not None:
return m.group(1)
else:
return haystack
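getExpNumber above pulls the trailing digits off a folder name with the '([0-9]+)$' pattern, returning the name unchanged when no digits are found. The same logic in isolation:

```python
import re

exp_number = re.compile(r'([0-9]+)$')

def get_exp_number(name):
    # Extract trailing digit run, e.g. 'EC14-Exp-042' -> '042'
    m = exp_number.search(name)
    return m.group(1) if m else name

# get_exp_number('EC14-Exp-042') -> '042'
# get_exp_number('no-digits')    -> 'no-digits'
```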
def collectData(self):
experiments = self.getExperiments()
print "I found the following experiments: \n", [exp[0] for exp in experiments]
if self.cont:
experiments = self.filterExperimentsIfContinue(experiments)
print "Because the 'continue' flag was set, I will only parse the following\n" \
" experiments (because I think I already did the other ones before):\n", \
[exp[0] for exp in experiments]
for exp in experiments:
type = self.getType(exp)
# print exp[0],type
individuals = self.getIndividuals(exp)
print "parsing experiment {exp} (type: {type}) with {indivs} individuals".format(
exp=exp[0],
type=type,
indivs=len(individuals)
)
count = 0
for indiv in individuals[:self.limit]:
features = self.getFeatures(exp, type, indiv)
self.writeFeatures(features)
count += 1
self.printExperimentProgress(len(individuals), count)
self.saveProgress(exp)
self.closeFile()
print "wrote {} lines to {}".format(self.rowCount, self.outputFile)
def saveProgress(self, experiment):
self.experimentsDone.append(experiment)
if os.path.isfile(self.pickleLocation):
os.remove(self.pickleLocation)
pickle.dump(self.experimentsDone, open(self.pickleLocation, "wb"))
def loadProgress(self):
self.experimentsDone = pickle.load(open(self.pickleLocation, "rb"))
def filterExperimentsIfContinue(self, experiments):
self.loadProgress()
out = [experiment for experiment in experiments if experiment not in self.experimentsDone]
return out
def getIndividuals(self, experiment):
indivs = glob.glob(experiment[2] + os.path.sep + PathConfig.populationFolderNormal + os.path.sep + "*.vxa")
output = [(os.path.basename(indiv).split("_")[0], indiv) for indiv in indivs]
output.sort(key=lambda x: int(x[0]))
return output
def getType(self, experiment):
# if the alternative population DOES have a disease then the main experiment DIDN'T have a disease
if self.hasAltPopWithDisease(experiment):
if not self.hasAltPopWithoutDisease(experiment):
return "with disease"
else:
self.errorHasBothPopFiles(experiment)
# if the alternative population DOESN'T have a disease then the main experiment DID have a disease
if self.hasAltPopWithoutDisease(experiment):
if not self.hasAltPopWithDisease(experiment):
return "no disease"
else:
self.errorHasBothPopFiles(experiment)
# if neither is the case, then there are no population files for this experiment... abort
self.errorHasNoPop(experiment)
def hasAltPopWithoutDisease(self, experiment):
return self.hasAltPop(experiment, "no disease")
def hasAltPopWithDisease(self, experiment):
return self.hasAltPop(experiment, "with disease")
def hasAltPop(self, experiment, condition):
altPopPath = experiment[2] + os.path.sep + PathConfig.populationFoldersAlt[condition]
if not os.path.isdir(altPopPath):
return False
if len(os.listdir(altPopPath)) > 0:
return True
return False
def getFeatures(self, experiment, type, indiv):
output = []
for feature in self.featureExtractors:
output += feature.extract(experiment, type, indiv)
return output
def printExperimentProgress(self, total, current):
percentDone = round(100 * current * 1.0 / total)
if percentDone != self.previousPercentDone:
sys.stdout.write('{}% done\r'.format(int(percentDone)))
sys.stdout.flush()
self.previousPercentDone = percentDone
def writeFeatures(self, features):
if not self.headersWritten:
self.headers = self.getFeatureHeader()
writeOption = "wb"
if self.cont:
writeOption = "ab"
self.outputFileHandle = open(self.outputFile, writeOption)
self.writer = csv.DictWriter(self.outputFileHandle, fieldnames=self.headers)
if not self.cont:
self.writer.writeheader()
self.headersWritten = True
self.rowCount += 1
rowDict = dict(zip(self.headers, features))
self.writer.writerow(rowDict)
def closeFile(self):
if self.outputFileHandle:
self.outputFileHandle.close()
def getFeatureHeader(self):
output = []
for feature in self.featureExtractors:
output += feature.getCSVheader()
return output
@staticmethod
def errorHasBothPopFiles(experiment):
print "ERROR: this shouldn't happen - an experiment has alternative population files " \
"both WITH and WITHOUT disease in addition to the normal experiment traces:"
print experiment
print "...Please fix this before continuing. Exiting."
quit()
@staticmethod
def errorHasNoPop(experiment):
print "ERROR: the following experiment has no alternative population files (neither with disease nor without):"
print experiment
print "...Please fix this before continuing. Exiting."
quit()
if __name__ == "__main__":
import sys
if len(sys.argv) == 1:
print docString
quit()
pattern = False
outputFile = False
limit = False
con = False
if len(sys.argv) >= 2:
outputFile = sys.argv[1]
if len(sys.argv) >= 3:
pattern = sys.argv[2]
if pattern.lower() == "null" or pattern.lower() == "false":
pattern = False
if len(sys.argv) >= 4:
limit = sys.argv[3]
if len(sys.argv) == 5:
cont = sys.argv[4]
if cont.lower() in ["cont", "continue", "c", "true", "y"]:
con = True
else:
con = False
dataCol = DataCollector2(pattern, outputFile, limit, con)
dataCol.collectData()
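The `writeFeatures` method above pairs header names with feature values via `dict(zip(...))` before handing the row to `csv.DictWriter`. A minimal self-contained sketch of that pattern (the headers and rows here are hypothetical, written to an in-memory buffer rather than a file):

```python
import csv
import io

headers = ['experiment', 'fitness']          # hypothetical column names
rows = [['run_01', 0.82], ['run_02', 0.75]]  # hypothetical feature rows

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=headers)
writer.writeheader()
for row in rows:
    # pair each header with its value, exactly as writeFeatures does
    writer.writerow(dict(zip(headers, row)))

lines = buf.getvalue().splitlines()
# lines[0] is the header row, followed by one line per data row
```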
|
Recent research has shown that the triggering receptor expressed on myeloid cells 2 (TREM2) in microglia is closely related to the pathogenesis of Alzheimer’s disease (AD). The mechanism of this relationship, however, remains unclear. TREM2 is part of the TREM family of receptors, which are expressed primarily in myeloid cells, including monocytes, dendritic cells, and microglia. The TREM family members are cell surface glycoproteins with an immunoglobulin-like extracellular domain, a transmembrane region and a short cytoplasmic tail region. The present article reviews the following: (1) the structure, function, and variant site analysis of the Trem2 gene; (2) the metabolism of TREM2 in peripheral blood and cerebrospinal fluid; and (3) the possible underlying mechanism by which TREM2 regulates innate immunity and participates in AD.
Alzheimer’s disease (AD) is the most common age-related neurodegenerative disease. The early symptoms of AD are short-term memory loss and disorientation, followed by progressive memory loss and irreversible cognitive decline. As AD progresses, severe clinical neuropsychiatric symptoms appear, and the patients can no longer take care of themselves. On average, a person with Alzheimer’s lives 4 to 8 years after diagnosis, but patients can live as long as 20 years, depending on other factors. AD is characterized by an abnormal aggregation of β-amyloid (Aβ) peptides and neuronal neurofibrillary tangles (NFTs) derived from hyperphosphorylated tau (p-tau).
Currently, approximately 47 million people live with dementia worldwide, and that number is expected to exceed 131 million by 2050 . The global costs of dementia are projected to reach $1 trillion by 2018. Therefore, AD has become an urgent health problem around the world .
Innate immunity is a type of non-targeted defense mechanism . When a living organism comes into contact with the external environment, such as with viruses, germs, or other pathogenic microorganisms, innate immunity protects the body and keeps it healthy. Innate immunity was gradually established over the long course of evolution. Over the past few years, genetic research has identified new pathogenic factors associated with AD . In the analysis of these factors, the innate immune system has attracted a great deal of attention, especially regarding the function of microglia .
Microglia are the macrophages of the brain and spinal cord and act as the first line of immune defense in the central nervous system (CNS). Microglia participate in the identification of pathogens and activate the innate immune response, which is of major importance in the brain . Based on previous reports, we know that mouse microglial cells exhibit a chemotactic response to β-amyloid 1-42 (Aβ42). Furthermore, the mouse homolog formyl peptide receptor 2 (mFPR2) enhances Aβ42 internalization when microglia are stimulated by Toll-like receptors (TLRs) . The activation of TLRs promotes the ability of microglia to digest and process Aβ42. In the pathologic process of AD, the clearance activity of microglial cells may be dynamic [6, 7]. Triggering receptor expressed on myeloid cells 2 (TREM2) is expressed in microglia. Studies [8, 9] have shown that certain TREM2 variants have an important effect on AD, similar to that of apolipoprotein E (ApoE); both are risk factors for AD. TREM2 and ApoE ε4 may interact synergistically in the preclinical stage of AD .
The Trem2 gene is located on human chromosome 6, from 41,126,246 bp to 41,130,922 bp, with a total length of 4676 bp. It consists of five exons that encode a 230-amino-acid protein (Fig. 1) . TREM2 belongs to the TREM family of receptors, which are expressed in a variety of myeloid cells. TREM2 is mainly expressed in monocytes, macrophages, dendritic cells, and microglia. Members of the TREM family are cell surface glycoproteins with immunoglobulin-like extracellular domains, transmembrane regions, and short cytoplasmic tails. In the brain, TREM2 is involved in regulating the inflammatory responses of microglia and the phagocytosis of cellular debris.
A decade ago, TREM2 was identified as a phagocytic receptor for bacteria . In neural cells, TREM2 signaling is completely dependent on the adapter protein DNAX-activation protein 12 (DAP12, also known as TYROBP), because the major isoform of TREM2 has a short cytoplasmic tail. Since TREM2 lacks a signaling-competent cytoplasmic domain, it must signal via DAP12, which contains an immunoreceptor tyrosine-based activation motif (ITAM) . This cooperation is absolutely necessary for effective phagocytosis.
While studies have suggested that TREM2 can regulate the number of myeloid cells, its impact in AD remains unknown. TREM2 knockdown in primary microglia was found to reduce cell number , while crosslinking TREM2 promoted an increase in osteoclast number in cell cultures . It has been confirmed that TREM2 can increase the number of myeloid cells in the context of inflammation or disease. Recent studies have revealed that myeloid cell accumulation around amyloid plaques was reduced in TREM2-hemizygous [16, 17] and DAP12-deficient AD mouse models.
Enhanced phagocytosis is an important function of TREM2. TREM2 is expressed in myeloid cells in the CNS, which have high phagocytic activity . Both in vitro and in vivo studies have shown that loss of TREM2 function results in reduced phagocytosis and β-amyloid 1-42 (Aβ42) uptake . In contrast, TREM2 overexpression via a lentiviral vector system enhanced the clearance of apoptotic neurons .
Studies have shown that a few rare variants of TREM2 are associated with susceptibility to AD [22, 23]. Research on TREM2 missense mutations in European populations revealed that variants such as L211P , H157Y , R136Q , T96K , D87N , T66M , R62H , R47H , and Q33X (Fig. 2) are associated with AD. However, studies of non-European populations have shown different results. In our research on the Chinese population, the R47H missense variant was very rare, and another missense variant, G115S, was found to be related to AD .
Many studies have reported that the R47H variant of TREM2 is associated with the risk of AD [9, 27, 31–33]. Lill and colleagues reported that the rs75932628 variant of TREM2 significantly increased the level of CSF total tau, but not Aβ42, in a European population, and suggested that the role of TREM2 in AD may involve tau dysfunction . However, as shown in previous studies, the rs75932628 variant of TREM2 was not detected in either Chinese or Korean populations [30, 35–37]. These results suggest that the association between TREM2 and the incidence of AD differs across ethnicities, which may be related to the genetic backgrounds of different races.
The R47H variant of TREM2 increases terminal glycosylation of complex oligosaccharides in the Golgi apparatus and reduces TREM2’s solubility. This may affect the binding of DAP12 to TREM2, which would, in turn, affect the function of the receptor . Meanwhile, the R47H variant has been presumed to destabilize the TREM2 protein . On the basis of crystal structure analysis, another explanation suggested that the AD risk variant R47H might impair binding to a cell-surface ligand (TREM2-L) while only slightly affecting the stability and structure of TREM2 . However, a contrary result showed that transfected R47H-TREM2 constructs have an increased half-life relative to wild-type TREM2 and can resist proteasomal degradation in the endoplasmic reticulum (ER) . These explanations are not mutually exclusive and may depend on the specific cellular context. The tyrosine-38 and threonine-66 residues of TREM2 are essential for the glycosylation of the protein. The Y38C and T66M variants of TREM2 may significantly alter glycosylation patterns and impair transport to the plasma membrane. Previous studies found that the TREM2 R47H variant showed only a slight difference in N-glycosylation of complex oligosaccharides compared to the Y38C and T66M variants, which are associated with Nasu-Hakola disease (NHD) . This difference may help explain why NHD is an early-onset disease whereas AD is a late-onset disease.
Data have shown that TREM2 plays a vital role in the cognitive function of the brain. An important function of TREM2 is its regulation of phagocytosis in microglia. Microglial removal of damaged cells, organic matrix molecules, and biomacromolecules must be assisted by the TREM2-DAP12 receptor complex. As a glial cell immunoreceptor, TREM2 has been found to modulate microglia-mediated inflammatory responses . A decade ago, Gordon described the mechanism of two opposite types of macrophage activation ; the M1 and M2 labels are now widely used to denote classically activated (proinflammatory) and alternatively activated (anti-inflammatory) microglia, although this dichotomy is controversial . Outside the CNS, the mononuclear phagocyte system has been divided into M1 and M2 phenotypes. Studies show that in the CNS, because microglial activation is heterogeneous, microglia can likewise be categorized into two opposite types: the M1 phenotype and the M2 phenotype . Therefore, we conjecture that in the brain, microglia have two opposite roles, proinflammatory (M1, cytotoxic) and anti-inflammatory (M2, neuroprotective). TREM2 may inhibit neuroinflammation by promoting the M2 microglial phenotype. This may reveal the potential mechanism by which TREM2 inhibits microglial inflammatory responses . Microglial cells participate in the removal of Aβ aggregates through phagocytosis. A previous study found that TREM2 overexpression by intracerebral lentiviral particle injection significantly reduced soluble and insoluble Aβ42 aggregates in the brain. In middle-aged APPswe/PS1ΔE9 mice (7–8 months old), the ability of microglia to remove amyloid plaques increased after TREM2 overexpression, and the density of amyloid plaques in the brain decreased . However, in a TREM2-deficient mouse model, the concentration of amyloid plaques in the brain did not change .
Interestingly, after TREM2 overexpression in 18-month-old APPswe/PS1ΔE9 mice, the concentration of amyloid plaques was not attenuated, and no alterations in the levels of Aβ42 were observed in the brain . Research utilizing mouse models has shown that the overexpression of TREM2 plays a protective role in both early- and mid-term AD, whereas this protective effect is lost in late-term AD . We speculate that the reduced number of microglia in the brains of older mice may lead to a decline in phagocytosis.
The high level of phosphorylation and abnormal aggregation of tau protein are pathophysiological factors associated with neuronal and synaptic damage. The loss of neurons and synapses in the hippocampus is associated with a decrease in spatial cognitive function. The 7-month-old P301S mouse model has been shown to exhibit significant neuronal and synaptic damage in this region. Overexpression of TREM2 is effective in inhibiting these lesions; water maze experiments have demonstrated that TREM2 overexpression can restore spatial cognitive impairment in mice . In addition, the overexpression of TREM2 by intracerebral lentiviral particle injection has been found to significantly reduce hyperphosphorylation of tau proteins and the activity of cyclin-dependent kinase 5 (CDK5) and glycogen synthase kinase-3β (GSK3β) . Thus, TREM2 overexpression significantly reduces neuronal loss and may play a role in the phosphorylation of tau protein, thereby reducing the incidence of AD.
A recent study showed that TREM2 releases its extracellular domain after protease cleavage, leaving only the carboxy-terminal fragment (CTF) attached to the membrane . Soluble TREM2 (sTREM2) may be produced by proteolytic cleavage and alternative splicing. If insertions or frameshifts occur in exon 4, translation of the transmembrane domain can be terminated, which is speculated to yield a soluble product. In addition to the membrane-bound form, sTREM2 has been detected in the supernatants of human and mouse cell cultures and in the peripheral blood and cerebrospinal fluid (CSF) . sTREM2 in human peripheral blood and CSF can be used as a more accurate tool for understanding the biological effects of TREM2 in the pathogenesis of AD. Hu et al. analyzed the expression of TREM2 mRNA and protein in the peripheral blood of a Northern Han Chinese population . The results showed that, at both the mRNA and protein levels, TREM2 expression was higher on monocytes and granulocytes and in plasma in the AD group than in controls. Mori et al. performed a similar analysis of TREM2 expression in the peripheral blood of a small population of Japanese individuals (26 patients with AD, 8 males and 18 females) . However, another study noted that the absolute level of TREM2 expression in human peripheral blood monocytes is quite low and unlikely to be useful for drawing mechanistic conclusions about TREM2 . The upregulation of TREM2 in the peripheral blood indicates that the gene is abnormally active in the development of AD pathology. More experiments are needed to confirm whether TREM2 is differentially expressed in the peripheral blood in some populations. The level of sTREM2 in the CSF also changes. Although Kleinberger et al. showed that sTREM2 levels were reduced in the CSF of AD patients , other studies have shown that sTREM2 levels in the CSF increase with age and are positively correlated with the levels of Aβ42 and tau protein [26, 57–59].
While most researchers think that TREM2 exerts an anti-inflammatory effect, the connection between TREM2 and inflammatory responses is not so simple. Depending on the cell type, the context, and the strength and duration of the stimuli, TREM2 seems to play different roles in inflammatory responses.
Some in vitro and in vivo studies have shown that TREM2 plays an anti-inflammatory role in certain contexts. In cell lines, TREM2 deficiency increases the levels of proinflammatory mediators, such as tumor necrosis factor-α (TNFα), interleukin-1β (IL1β), and interleukin-6 (IL6) . TREM2 knockdown in the senescence-accelerated mouse prone 8 (SAMP8) model also increased the production of inflammatory cytokines . Furthermore, overexpressing TREM2 in AD mouse models [20, 45] reduced the levels of proinflammatory transcripts. From these studies, we can speculate that TREM2 inhibits inflammatory responses in some contexts.
However, many studies have suggested that TREM2 can amplify or promote inflammatory responses. TREM2-deficient microglia show reduced activation and a more ramified morphology in cell cultures . In AD mouse models, TREM2-deficient microglia exhibit decreased cell size and surface area, as well as increased process length, resulting in reduced activation . sTREM2 activates the Akt–GSK3β–β-catenin pathway, which can suppress apoptosis in microglia . In this study , TREM2 promotes microglial survival by activating the Wnt/β-catenin signaling pathway: upregulation of the Wnt/β-catenin pathway suppresses GSK3β, restores β-catenin signaling, and promotes the survival of TREM2-deficient microglia in vitro and in vivo. NF-κB signaling is associated with proinflammatory cytokines; the inhibition of NF-κB signaling markedly downregulated the production of three proinflammatory cytokines (IL-1β, IL-6, and TNF) . Taken together, these findings support the idea that TREM2 regulates inflammatory responses.
Aβ can destroy synaptic transmission, induce oxidative stress, and trigger cell death in vitro . Meanwhile, microglia have been shown to engulf Aβ in the brain . Therefore, microglial phagocytosis of Aβ may serve a neuroprotective function. However, the absence of TREM2 significantly impairs the ability of microglia to engulf amyloid plaques. Some studies have reported that in TREM2-deficient AD mouse models, the number of microglia around amyloid plaques decreases because their metabolic fitness is reduced .
Condello et al. proposed a new hypothesis ; they postulated that the tight envelope of microglia around the amyloid surface constitutes a neuroprotective barrier that limits fibril outgrowth and plaque-associated toxicity. In AD mouse models, a lack of TREM2 or DAP12 results in more dispersed amyloid plaques and increased synapses, producing a morphology that resembles a sea urchin . The more these structures protrude outward, the larger the contact surface with nerve structures and the greater the potential harm to the nervous system. Thus, in human brains, the protective role of microglia may primarily be to act as a barrier that isolates amyloid plaques from peripheral nerve tissues.
This paper reviewed Trem2 gene expression, function, and variant site analysis, as well as TREM2 metabolism in peripheral blood and cerebrospinal fluid. It is important to note that, within the TREM family, the Trem2 gene plays an important role in the pathogenesis of AD. TREM2 helps maintain the ability of microglia to protect neurons and to engulf damaged ones. However, some variants of this gene not only lead to changes in TREM2 expression levels but also impair the ability of TREM2 to bind its ligands in microglia [55, 70]. Thus, these gene variants can influence the innate immune system. TREM2 mediates neuroprotection in microglial cells by regulating inflammatory responses and microglial survival (Fig. 3).
These results indicate that TREM2 may be a potential biomarker for AD diagnosis and treatment. In addition, TREM2 missense mutants have been found in many neurological immune deficiencies, indicating that TREM2 variants impact the immune function of the nervous system. Further research is needed to elucidate the biological role of TREM2 in the innate immune regulation of Alzheimer’s disease. Therefore, it is important to understand when, where, and how TREM2 plays a role in AD. This information could provide new insights into immune function and immunotherapy, so that the disease might be managed throughout its progression.
We would like to acknowledge the Fundamental Research Funds for the Central Universities of China (2015JBM096) awarded to YZ.
This work was supported by grants from the National Natural Science Foundation of China (81100809 and 81271417) and from the Beijing Natural Science Foundation (7152090) to YZ.
JTL carried out the literature review, participated in the figure design, and drafted the manuscript. YZ supervised the study, and contributed to and finalized the draft. Both authors read and approved the final manuscript.
World Alzheimer Report 2015 [http://www.alz.co.uk/research/WorldAlzheimerReport2015.pdf].
Casati M, Ferri E, Gussago C, Mazzola P, Abbate C, Bellelli G, Mari D, Cesari M, Arosio B. Increased expression of TREM2 in peripheral cells from mild cognitive impairment patients that progress into Alzheimer’s disease. Eur J Neurol. 2018. https://doi.org/10.1111/ene.13583.
Kober DL, Alexander-Brett JM, Karch CM, Cruchaga C, Colonna M, Holtzman MJ, Brett TJ. Neurodegenerative disease mutations in TREM2 reveal a functional surface and distinct loss-of-function mechanisms. eLife. 2016;5. https://doi.org/10.7554/eLife.20391.
|
"""
Class for IQ Data
TIQ format
Xaratustrah Aug-2015
"""
import os
import logging as log
import numpy as np
import xml.etree.ElementTree as et
from iqtools.iqbase import IQBase
class TIQData(IQBase):
def __init__(self, filename):
super().__init__(filename)
# Additional fields in this subclass
self.acq_bw = 0.0
self.rbw = 0.0
self.rf_att = 0.0
self.span = 0.0
self.scale = 0.0
self.header = ''
self.data_offset = 0
@property
def dictionary(self):
return {'center': self.center,
'nsamples_total': self.nsamples_total,
'fs': self.fs,
'nframes': self.nframes,
'lframes': self.lframes,
'data': self.data_array,
'nframes_tot': self.nframes_tot,
'DateTime': self.date_time,
'rf_att': self.rf_att,
'span': self.span,
'acq_bw': self.acq_bw,
'file_name': self.filename,
'rbw': self.rbw}
def __str__(self):
return \
'<font size="4" color="green">Record length:</font> {:.2e} <font size="4" color="green">[s]</font><br>'.format(
self.nsamples_total / self.fs) + '\n' + \
'<font size="4" color="green">No. Samples:</font> {} <br>'.format(self.nsamples_total) + '\n' + \
'<font size="4" color="green">Sampling rate:</font> {} <font size="4" color="green">[sps]</font><br>'.format(
self.fs) + '\n' + \
'<font size="4" color="green">Center freq.:</font> {} <font size="4" color="green">[Hz]</font><br>'.format(
self.center) + '\n' + \
'<font size="4" color="green">Span:</font> {} <font size="4" color="green">[Hz]</font><br>'.format(
self.span) + '\n' + \
'<font size="4" color="green">Acq. BW.:</font> {} <br>'.format(self.acq_bw) + '\n' + \
'<font size="4" color="green">RBW:</font> {} <br>'.format(self.rbw) + '\n' + \
'<font size="4" color="green">RF Att.:</font> {} <br>'.format(self.rf_att) + '\n' + \
'<font size="4" color="green">Date and Time:</font> {} <br>'.format(self.date_time) + '\n'
def read(self, nframes=10, lframes=1024, sframes=1):
"""Process the tiq input file.
Following information are extracted, except Data offset, all other are stored in the dic. Data needs to be normalized over 50 ohm.
AcquisitionBandwidth
Frequency
File name
Data I and Q [Unit is Volt]
Data Offset
DateTime
NumberSamples
Resolution Bandwidth
RFAttenuation (it is already considered in the data scaling, no need to use this value, only for info)
Sampling Frequency
Span
Voltage Scaling
"""
self.lframes = lframes
self.nframes = nframes
self.sframes = sframes
filesize = os.path.getsize(self.filename)
log.info("File size is {} bytes.".format(filesize))
        # Parse the header once via read_header(); it sets self.data_offset,
        # self.header, and all header-derived attributes (center, span, fs,
        # scale, ...), avoiding the XML parsing that used to be duplicated here.
        self.read_header()

        log.info("Proceeding to read binary section, 32bit (4 byte) little endian.")
        log.info('Total number of samples: {}'.format(self.nsamples_total))
        log.info("Frame length: {0} data points = {1}s".format(lframes, lframes / self.fs))
        self.nframes_tot = int(self.nsamples_total / lframes)
        log.info("Total number of frames: {0} = {1}s".format(self.nframes_tot, self.nsamples_total / self.fs))
        log.info("Start reading at offset: {0} = {1}s".format(sframes, sframes * lframes / self.fs))
        log.info("Reading {0} frames = {1}s.".format(nframes, nframes * lframes / self.fs))
total_n_bytes = 8 * nframes * lframes # 8 comes from 2 times 4 byte integer for I and Q
start_n_bytes = 8 * (sframes - 1) * lframes
try:
with open(self.filename, 'rb') as f:
f.seek(self.data_offset + start_n_bytes)
ba = f.read(total_n_bytes)
        except IOError:
            log.error('File seems to end here!')
            return
        # interpret the raw bytes as little endian 4 byte signed integers
        self.data_array = np.frombuffer(ba, dtype='<i4')
# Scale to retrieve value in Volts. Augmented assignment does not work here!
self.data_array = self.data_array * self.scale
self.data_array = self.data_array.view(
dtype='c16') # reinterpret the bytes as a 16 byte complex number, which consists of 2 doubles.
log.info("Output complex array has a size of {}.".format(self.data_array.size))
# in order to read you may use: data = x.item()['data'] or data = x[()]['data'] other wise you get 0-d error
def read_samples(self, nsamples, offset=0):
"""
Read a specific number of samples
Parameters
----------
nsamples How many samples to read
offset Either start from the beginning, i.e. 0 or start at a different offset.
Returns
-------
"""
self.read_header()
assert nsamples < (self.nsamples_total - offset)
total_n_bytes = 8 * nsamples # 8 comes from 2 times 4 byte integer for I and Q
start_n_bytes = 8 * offset
try:
with open(self.filename, 'rb') as f:
f.seek(self.data_offset + start_n_bytes)
ba = f.read(total_n_bytes)
        except IOError:
            log.error('File seems to end here!')
            return
        # interpret the raw bytes as little endian 4 byte signed integers
        self.data_array = np.frombuffer(ba, dtype='<i4')
# Scale to retrieve value in Volts. Augmented assignment does not work here!
self.data_array = self.data_array * self.scale
self.data_array = self.data_array.view(
dtype='c16') # reinterpret the bytes as a 16 byte complex number, which consists of 2 doubles.
log.info("Output complex array has a size of {}.".format(self.data_array.size))
# in order to read you may use: data = x.item()['data'] or data = x[()]['data'] other wise you get 0-d error
def read_header(self):
with open(self.filename) as f:
line = f.readline()
self.data_offset = int(line.split("\"")[1])
with open(self.filename, 'rb') as f:
ba = f.read(self.data_offset)
xml_tree_root = et.fromstring(ba)
for elem in xml_tree_root.iter(tag='{http://www.tektronix.com}AcquisitionBandwidth'):
self.acq_bw = float(elem.text)
for elem in xml_tree_root.iter(tag='{http://www.tektronix.com}Frequency'):
self.center = float(elem.text)
for elem in xml_tree_root.iter(tag='{http://www.tektronix.com}DateTime'):
self.date_time = str(elem.text)
for elem in xml_tree_root.iter(tag='{http://www.tektronix.com}NumberSamples'):
self.nsamples_total = int(elem.text) # this entry matches (filesize - self.data_offset) / 8) well
for elem in xml_tree_root.iter('NumericParameter'):
if 'name' in elem.attrib and elem.attrib['name'] == 'Resolution Bandwidth' and elem.attrib['pid'] == 'rbw':
self.rbw = float(elem.find('Value').text)
for elem in xml_tree_root.iter(tag='{http://www.tektronix.com}RFAttenuation'):
self.rf_att = float(elem.text)
for elem in xml_tree_root.iter(tag='{http://www.tektronix.com}SamplingFrequency'):
self.fs = float(elem.text)
for elem in xml_tree_root.iter('NumericParameter'):
if 'name' in elem.attrib and elem.attrib['name'] == 'Span' and elem.attrib['pid'] == 'globalrange':
self.span = float(elem.find('Value').text)
for elem in xml_tree_root.iter(tag='{http://www.tektronix.com}Scaling'):
self.scale = float(elem.text)
log.info("Center {0} Hz, span {1} Hz, sampling frequency {2} scale factor {3}.".format(self.center, self.span,
self.fs, self.scale))
log.info("Header size {} bytes.".format(self.data_offset))
self.header = ba
def save_header(self):
"""Saves the header byte array into a txt tile."""
with open(self.filename_wo_ext + '.xml', 'wb') as f3:
f3.write(self.header)
log.info("Header saved in an xml file.")
|
Sara Carbonero - La mujer del arquero (The Goalkeeper's Wife). At the end of the 2010 soccer World Cup, Casillas, Spain's goalkeeper, kissed his girlfriend, a reporter who was interviewing him in front of the cameras. That's her.
Marriage gift for a childhood friend.
Old warrior calango, coming from the imaginary land Caatinga governed by Padim-Padiciço.
Mad lumberjack, reminds me of grandpa.
Boomkin concept, from World of Warcraft, wearing gear.
|
###
# Copyright (c) 2012, Valentin Lorentz
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
#
# * Redistributions of source code must retain the above copyright notice,
# this list of conditions, and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright notice,
# this list of conditions, and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the author of this software nor the name of
# contributors to this software may be used to endorse or promote products
# derived from this software without specific prior written consent.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
###
import requests
from BeautifulSoup import BeautifulSoup
import supybot.utils as utils
import supybot.world as world
from supybot.commands import *
import supybot.plugins as plugins
import supybot.ircmsgs as ircmsgs
import supybot.schedule as schedule
import supybot.ircutils as ircutils
import supybot.callbacks as callbacks
from supybot.i18n import PluginInternationalization, internationalizeDocstring
_ = PluginInternationalization('Darkfallonline')
servers = (('US1', 'http://www.us1.darkfallonline.com/news'),
('EU1', 'http://www.eu1.darkfallonline.com/news'),
)
login = 'https://ams.darkfallonline.com/AMS/'
CHANNEL = '#progval'
def check_status(url):
    soup = BeautifulSoup(requests.get(url).text)
    status = {'players': False, 'gms': False, 'mastergms': False,
            'admins': False}
    for img in soup.findAll('img'):
        for type_ in status:
            if img["src"].startswith("images/%s_online" % type_):
                status[type_] = True
    return status
def check_login_status(url):
return requests.head(url).status_code == 200
def write_errors(f):
def newf(*args, **kwargs):
try:
f(*args, **kwargs)
        except Exception:
            import traceback
            traceback.print_exc()
return
return newf
@internationalizeDocstring
class Darkfallonline(callbacks.Plugin):
"""Add the help for "@plugin help Darkfallonline" here
This should describe *how* to use this plugin."""
threaded = True
def __init__(self, irc):
super(Darkfallonline, self).__init__(irc)
self._state = {}
for server, url in servers:
self._state[server] = check_status(url)
self._login = check_login_status(login)
schedule.addPeriodicEvent(self._announcer, 10,
'Darkfallonline_checkstatus')
def die(self):
schedule.removeEvent('Darkfallonline_checkstatus')
@write_errors
def _announcer(self):
for server, url in servers:
status = self._state[server]
new_status = check_status(url)
for irc in world.ircs:
if CHANNEL in irc.state.channels:
for type_ in new_status:
if new_status[type_] == status[type_]:
continue
elif new_status[type_]:
msg = '[%s] %s is going up' % (server,
type_.capitalize())
else:
msg = '[%s] %s is going down' % (server,
type_.capitalize())
irc.queueMsg(ircmsgs.privmsg(CHANNEL, msg))
self._state[server] = new_status
        new_login_status = check_login_status(login)
        if new_login_status != self._login:
            if new_login_status:
                msg = '[login] Going up'
            else:
                msg = '[login] Going down'
            for irc in world.ircs:
                if CHANNEL in irc.state.channels:
                    irc.queueMsg(ircmsgs.privmsg(CHANNEL, msg))
        self._login = new_login_status
def status(self, irc, msg, args):
"""takes no arguments
Return the status of all servers."""
for server, status in self._state.items():
irc.reply('Up on %s: %s' % (server,
format('%L', [x.capitalize() for x,y in status.items() if y]) or 'none'),
private=True)
irc.reply('Login: %s' % ('on' if self._login else 'off'), private=True)
Class = Darkfallonline
# vim:set shiftwidth=4 softtabstop=4 expandtab textwidth=79:
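For illustration, the image-based status check above can be reduced to a self-contained sketch (the `parse_status` helper and the sample `src` values are hypothetical, standing in for what `soup.findAll('img')` returns):

```python
# Hypothetical sketch of the image-based status check used by check_status():
# the news page marks each online service with an <img> whose src begins
# with "images/<type>_online". The sample src values below are invented.

def parse_status(img_srcs):
    """Return a status dict from a list of <img> src attributes."""
    status = {'players': False, 'gms': False, 'mastergms': False,
              'admins': False}
    for src in img_srcs:
        for type_ in status:
            if src.startswith("images/%s_online" % type_):
                status[type_] = True
    return status

print(parse_status(["images/players_online.gif", "images/admins_offline.gif"]))
# → {'players': True, 'gms': False, 'mastergms': False, 'admins': False}
```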
|
The Unknown Strength Moral Stories.
The story is about two children who lived in a village; one was six and the other was ten. They were best friends, as close as brothers: they stayed, played, ate and bathed together.
One day they wandered a little far from their village. While they were playing, the elder one, ten years old, fell into a well and began screaming loudly, because he did not know how to swim. The younger one, only six, looked around for help but could not find anyone to call. Then he saw a bucket tied to a rope. He immediately dropped the bucket into the well and asked his friend to hold on to it. As soon as his friend grabbed the bucket, he started pulling with his full strength, a small child of six hauling the weight of a bigger child of ten, and he did not stop until he had pulled his friend out of the well.
Up to this point the story is easy enough to follow. But as the boys celebrated, a fear crept into their minds: surely they would be scolded when they went back to the village and told their story. Surprisingly, nothing of the sort happened. When they narrated what had occurred, no one believed it could be true, and the villagers were right to doubt it: the small child did not have the strength to lift even a bucket full of water, so lifting such a big child seemed simply impossible.
But there was one man in the village who believed their story: Raheem uncle, the eldest and wisest man there. Everyone trusted him; if he said something, it had to be true. So the villagers went to him together and asked how this could be possible. He smiled and replied that the small child had already explained exactly how he did it: he threw the bucket into the well, the elder child held on, and the smaller child pulled. 'You already know how he did it,' he said. 'The question is not how he could do it but why he could do it. Where did he get that strength? There is only one answer: at the moment he was pulling, there was no one there to tell him that he could not do it. No one at all, not even himself.'
Thought: Nothing is impossible as long as no one tells you that it is.
If You Like... Comment And Share.
|
#! /usr/bin/python
# -*- coding: utf-8 -*-
import json, os
from django.shortcuts import redirect
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt
from app_proyecto.models import Proyecto
from app_firmware.models import Firmware, VersionFirmware
from django.conf import settings
from django_angular import General
@csrf_exempt
def insertarVersionFirmware(request):
if request.user.is_authenticated():
idioma = request.user.persona.idioma
if request.is_ajax():
try:
_firmware = Firmware.objects.get(pk=request.POST['versionFirmwarePK'])
valor = request.POST['estadoVersionFirmware']
if 'clave' in request.POST['txtSintaxisVersion'] and 'valor' in request.POST['txtSintaxisVersion']:
_versionFirmware = VersionFirmware(firmware=_firmware,
version=request.POST['txtVersionFirmware'],
propiedadesJSON="--",
modulosJSON="--",
pinesJSON="__",
archivo=request.FILES['txtIconoFirmware'],
sintaxis=request.POST['txtSintaxisVersion'],
estado=True)
if (valor == "0"):
_versionFirmware.estado = False
else:
_versionFirmware.estado = True
_versionFirmware.save()
dict = obtenerJSON(_versionFirmware, idioma)
else:
if idioma == 'ES':
mensaje = "Debe ingresar 'clave y valor en la sintaxis'"
else:
mensaje = "You must enter 'key and value in the syntax'"
dict = {
"codError": General.codError,
"mensaje": mensaje
}
except Exception as ex:
if idioma == 'ES':
mensaje = "Ha ocurrido error interno"
else:
mensaje = 'Internal error occurred'
dict = {
"codError": General.codError,
"mensaje": mensaje
}
data_json = json.dumps(dict)
            return HttpResponse(data_json, content_type="application/json")
else:
return redirect('/')
@csrf_exempt
def getListVersionPorFirmware(request):
if request.user.is_authenticated():
idioma = request.user.persona.idioma
if request.is_ajax():
try:
_firmware = Firmware.objects.get(pk=request.POST['versionFirmwarePK'])
_VersionFirmware = VersionFirmware.objects.filter(firmware=_firmware)
__Firmware = {'pk': _firmware.pk,
'nombre': _firmware.nombre,
'lenguaje': _firmware.lenguaje,
'icono': _firmware.icono.url,
'proyecto': _firmware.tipoProyectoCompilador.pk,
}
_listaVersionfirmware = [{'pk': i.pk,
'version': i.version,
'archivo': i.archivo.url,
'estado': i.estado
} for i in _VersionFirmware]
dict = {
"firmware": __Firmware,
"listaVersionFirmware": _listaVersionfirmware,
"codError": General.codExito
}
except:
if idioma == 'ES':
mensaje = "Ha ocurrido error interno"
else:
mensaje = 'Internal error occurred'
dict = {
"codError": General.codError,
"mensaje": mensaje
}
data_json = json.dumps(dict)
            return HttpResponse(data_json, content_type="application/json")
else:
return redirect('/')
@csrf_exempt
def eliminarVersionFirmware(request):
if request.user.is_authenticated():
idioma = request.user.persona.idioma
if request.is_ajax():
try:
_firmware= Firmware.objects.get(pk=request.POST['FirmwarePK'])
_listaVersionFirmware = VersionFirmware.objects.filter(firmware=_firmware)
if (len(_listaVersionFirmware) == 0):
_firmware.delete()
else:
_versionFirmware = VersionFirmware.objects.get(pk=request.POST['versionFirmwarePK'])
_versionFirmware.delete()
if idioma == 'ES':
mensaje = "El firmware se ha eliminado Correctamente"
else:
mensaje = 'The firmware has been successfully deleted'
dict = {
"codError": General.codExito,
"mensaje": mensaje
}
except Exception as e:
if idioma == 'ES':
mensaje = "Ha ocurrido error interno"
else:
mensaje = 'Internal error occurred'
dict = {
"codError": General.codError,
"mensaje": mensaje
}
data_json = json.dumps(dict)
            return HttpResponse(data_json, content_type="application/json")
else:
return redirect('/')
@csrf_exempt
def postFirmware(request):
if request.user.is_authenticated():
idioma = request.user.persona.idioma
if request.is_ajax():
try:
_proyecto = Proyecto.objects.get(pk=request.POST['proyectoCompilador'])
_firmware = Firmware(
nombre=request.POST['txtNombre'],
lenguaje=request.POST['txtLenguaje'],
icono=request.FILES['txtIcono'],
tipoProyectoCompilador=_proyecto
)
_firmware.save()
valor = request.POST['estado']
_versionFirmware = VersionFirmware(firmware=_firmware,
version="Version 1",
propiedadesJSON="--",
modulosJSON="--",
pinesJSON="__",
archivo=request.FILES['txtArchivoFirmware'],
sintaxis=request.POST['txtSintaxis'],
estado=True)
if (valor == "0"):
_versionFirmware.estado = False
else:
_versionFirmware.estado = True
_versionFirmware.save()
dict = obtenerJSON(_versionFirmware, idioma)
            except Exception as ex:
                # Avoid a NameError if the failure happened before creation
                if '_versionFirmware' in locals():
                    _versionFirmware.delete()
if idioma == 'ES':
mensaje = "Ha ocurrido error interno"
else:
mensaje = 'Internal error occurred'
dict = {
"codError": "1111",
"mensaje": mensaje
}
data_json = json.dumps(dict)
            return HttpResponse(data_json, content_type="application/json")
else:
return redirect('/')
def obtenerJSON(_versionFirmware, idioma):
_urlFirmware = _versionFirmware.archivo.name
_nombreFirmware = _urlFirmware.split("/")
infile = open(os.path.join(settings.BASE_DIR, 'media', 'archivo_firmware', _nombreFirmware[1]), 'r')
auxiliarInicio = False
contadorInicio = 0
mensajeJson = "{"
auxiliarModuloInicio = False
moduloJSON = "[{"
auxiliarPinesInicio = False
pinesJSON = "{"
for line in infile:
if line[:-1] == General.etiquetaInicio:
auxiliarInicio = True
contadorInicio = contadorInicio + 1
if line[:-1] == General.etiquetaFin:
contadorInicio = contadorInicio + 1
auxiliarInicio = False
if line == General.etiquetaFin:
contadorInicio = contadorInicio + 1
if line[:-1] == General.etiquetaModuloInicio:
auxiliarModuloInicio = True
if line[:-1] == General.etiquetaModuloFin:
moduloJSON = moduloJSON[:-1] + " } , { "
auxiliarModuloInicio = False
if line == General.etiquetaModuloFin:
moduloJSON = moduloJSON[:-1] + " } , { "
auxiliarModuloInicio = False
if line[:-1] == General.etiquetaPinesInicio:
auxiliarPinesInicio = True
if line[:-1] == General.etiquetaPinesFin:
auxiliarPinesInicio = False
if line == General.etiquetaPinesFin:
auxiliarPinesInicio = False
if auxiliarPinesInicio == True:
if line[:-1] != General.etiquetaPinesInicio and line[
:-1] != General.etiquetaPinesFin and line != General.etiquetaPinesFin:
pinesJSON = pinesJSON + line[:-1] + ","
if auxiliarInicio == True:
if line[:-1] != General.etiquetaInicio and line[:-1] != General.etiquetaFin and line != General.etiquetaFin:
mensajeJson = mensajeJson + line[:-1] + ","
if auxiliarModuloInicio == True:
if line[:-1] != General.etiquetaModuloInicio and line[
:-1] != General.etiquetaModuloFin and line != General.etiquetaModuloFin:
moduloJSON = moduloJSON + line[:-1] + ","
pinesJSON = pinesJSON[:-2] + "}"
mensajeJson = mensajeJson[:-1] + "}"
moduloJSON = moduloJSON[:-4] + "]"
if contadorInicio == 2:
_versionFirmware.pinesJSON = pinesJSON
_versionFirmware.propiedadesJSON = mensajeJson
_versionFirmware.modulosJSON = moduloJSON
_versionFirmware.save()
if idioma == 'ES':
mensaje = "se ha Registrado correctamente el Firmware"
else:
mensaje = 'The Firmware has been successfully registered'
dict = {
"codError": "0000",
"mensaje": mensaje
}
else:
_versionFirmware.delete()
if idioma == 'ES':
mensaje = "No se ha registrado el Firmware, Por Favor Revise el Archivo"
else:
mensaje = 'You have not registered the Firmware, Please Check the File'
dict = {
"codError": "1111",
"mensaje": mensaje
}
return dict
@csrf_exempt
def getListFirmwareTodosActivos(request):
if request.user.is_authenticated():
idioma = request.user.persona.idioma
if request.is_ajax():
try:
_version_firmware = VersionFirmware.objects.filter(estado=True)
_listafirmware = [{'pk': i.pk,
'nombre': i.firmware.nombre,
'version': i.version,
'archivo': i.archivo.url,
'propiedadesJSON': i.propiedadesJSON,
'modulosJSON': i.modulosJSON,
'pinJSON': i.pinesJSON,
'estado': False
} for i in _version_firmware]
dict = {
"listaFirmware": _listafirmware,
"codError": "0000"
}
except:
if idioma == 'ES':
mensaje = 'No se encuentra el Nombre del firmware dentro de la Línea Comando'
else:
mensaje = 'The Firmware Name is not found inside the Command Line'
dict = {
"codError": "1111",
"mensaje": mensaje
}
data_json = json.dumps(dict)
            return HttpResponse(data_json, content_type="application/json")
else:
return redirect('/')
@csrf_exempt
def getListFirmwareActivos(request):
if request.user.is_authenticated():
idioma = request.user.persona.idioma
if request.is_ajax():
try:
_proyecto = Proyecto.objects.get(pk=request.POST['proyectoCompilador'])
_firmware = Firmware.objects.filter(tipoProyectoCompilador=_proyecto)
                _version_firmware = VersionFirmware.objects.filter(firmware__in=_firmware, estado=True)
_listafirmware = [{'pk': i.firmware.id,
'nombre': i.firmware.nombre,
'lenguaje': i.firmware.lenguaje,
'version': i.version,
'icono': i.firmware.icono.url,
'archivo': i.archivo.url,
'proyecto': i.firmware.tipoProyectoCompilador.nombreCarpeta,
'estado': i.estado,
'firmwarePK':i.firmware.pk,
'versionPK':i.pk
} for i in _version_firmware]
dict = {
"listaFirmware": _listafirmware,
"codError": "0000"
}
except:
if idioma == 'ES':
mensaje = 'No se encuentra el Nombre del firmware dentro de la Línea Comando'
else:
mensaje = 'The Firmware Name is not found inside the Command Line'
dict = {
"codError": "1111",
"mensaje": mensaje
}
data_json = json.dumps(dict)
            return HttpResponse(data_json, content_type="application/json")
else:
return redirect('/')
@csrf_exempt
def getListFirmwareMinimoActivos(request):
if request.user.is_authenticated():
idioma = request.user.persona.idioma
if request.is_ajax():
try:
_firmware = Firmware.objects.all()
                _version_firmware = VersionFirmware.objects.filter(firmware__in=_firmware, estado=True)
_listafirmware = [{'pk': i.firmware.id,
'nombre': i.firmware.nombre,
'version': i.version,
'modulos': i.modulosJSON,
} for i in _version_firmware]
dict = {
"listaFirmware": _listafirmware,
"codError": "0000"
}
except:
if idioma == 'ES':
mensaje = 'No se encuentra el Nombre del firmware dentro de la Línea Comando'
else:
mensaje = 'The Firmware Name is not found inside the Command Line'
dict = {
"codError": "1111",
"mensaje": mensaje
}
data_json = json.dumps(dict)
            return HttpResponse(data_json, content_type="application/json")
else:
return redirect('/')
@csrf_exempt
def getListFirmwareIncluirInactivos(request):
if request.user.is_authenticated():
idioma = request.user.persona.idioma
if request.is_ajax():
try:
_proyecto = Proyecto.objects.get(pk=request.POST['proyectoCompilador'])
_firmware = Firmware.objects.filter(tipoProyectoCompilador=_proyecto)
                _version_firmware = VersionFirmware.objects.filter(firmware__in=_firmware)
_listafirmware = [{'pk': i.firmware.id,
'nombre': i.firmware.nombre,
'lenguaje': i.firmware.lenguaje,
'version': i.version,
'icono': i.firmware.icono.url,
'archivo': i.archivo.url,
'proyecto': i.firmware.tipoProyectoCompilador.nombreCarpeta,
'estado': i.estado
} for i in _version_firmware]
dict = {
"listaFirmware": _listafirmware,
"codError": "0000"
}
except:
if idioma == 'ES':
mensaje = 'No se encuentra el Nombre del firmware dentro de la Línea Comando'
else:
mensaje = 'The Firmware Name is not found inside the Command Line'
dict = {
"codError": "1111",
"mensaje": mensaje
}
data_json = json.dumps(dict)
            return HttpResponse(data_json, content_type="application/json")
else:
return redirect('/')
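The views above repeat the same bilingual `codError`/`mensaje` error payload many times. A sketch of a helper that factors out that pattern (the helper name and the `COD_ERROR` constant are assumptions mirroring `General.codError`, not part of the project):

```python
# Hypothetical helper factoring out the bilingual "codError"/"mensaje"
# payload repeated in the views above. COD_ERROR mirrors General.codError;
# the helper name and constant are assumptions, not part of the project.
import json

COD_ERROR = "1111"

def error_dict(idioma, mensaje_es, mensaje_en):
    """Build the standard error payload in the user's language."""
    return {
        "codError": COD_ERROR,
        "mensaje": mensaje_es if idioma == 'ES' else mensaje_en,
    }

data_json = json.dumps(error_dict('ES', "Ha ocurrido error interno",
                                  'Internal error occurred'))
print(data_json)  # → {"codError": "1111", "mensaje": "Ha ocurrido error interno"}
```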
|
Squier has just released three basses that each nail a particular instrument or era: a '70s Jazz Bass, a white Precision, and this no-pickguard sunburst fretless Jazz Bass.
Whether you've been looking for the right-looking fretless or just want to start on fretless, don't miss this chance to get an affordable bass. Extensive gigging and stage abuse are recommended in order to get that great Jaco worn-out-bass look!
Squier wanted to pay tribute to several classic basses (hence the "Vintage" nickname) while acknowledging that many players immediately upgrade a budget bass's electronics with upper-market options, so it chose Duncan Designed pickups (hence the "Modified" adjective).
Click here to read a Bass Player Magazine review on these basses.
Disciples of groove will severely dig Squier’s elegantly slinky new Vintage Modified Jazz Bass Fretless, which fuses the warm, expressive voice of an upright with the defined attack of an electric.
The no-pickguard, Three-color Sunburst finish is a classic, and the lightweight agathis body is home to a pair of single-coil Duncan Designed™ Jazz Bass pickups. The special one-piece maple neck has a fretless Ebonol fingerboard with white celluloid lines that let you know where you’re at! Other features include a four-saddle chrome bridge, and chrome hardware and machine heads.
|
#
# Initially copied from:
# https://raw.githubusercontent.com/pypa/sampleproject/master/setup.py
#
from setuptools import setup, find_packages
import os
import codecs
here = os.path.abspath(os.path.dirname(__file__))
with codecs.open(os.path.join(here, 'README.rst'), encoding='utf-8') as f:
long_description = f.read()
setup(
name='networkzero',
version='1.0b1',
description='Making networking simple for teachers',
long_description=long_description,
url='https://github.com/tjguk/networkzero',
author='Tim Golden',
author_email='mail@timgolden.me.uk',
license='MIT',
classifiers=[
'Development Status :: 4 - Beta',
'Intended Audience :: Developers',
'Topic :: Software Development :: Build Tools',
'License :: OSI Approved :: MIT License',
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 2.7',
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
],
keywords='networking education',
packages=find_packages(exclude=['contrib', 'docs', 'tests']),
install_requires=[
'pyzmq==16.0.3',
'netifaces==0.10.6',
],
)
|
This collection of components relating to the history of computing is supplied courtesy of the Department of Computer Science at Virginia Tech, and is sponsored in part by a grant from the National Science Foundation (CDA-9312611). It is lacking in a few unique pieces, but it has almost everything on the main display of the history of computers from early times to now, with tons of history and devices that are too old for most people to recognize. It is home to the largest international collection of computing artifacts in the world, including computer hardware, software, documentation, ephemera, photographs and moving images. The stories that poured out about the museum pieces, in addition to all the labels, really brought the experience to life for me.
If these are not adequate, quench your thirst for photos of computer history at the Wave Report, where more than 1000 shots of the Computer Museum's collection are hosted. Computer History: a complete history of the early computer, starting with abacuses and ending in the early 1980s. The Deutsches Museum in Munich has an extensive computer section, with some pictures of their large collection of early mathematical instruments. When we homeschooled, one of our favorite books was a timeline of history throughout the world.
Even very young children can use a timeline to see the span of time between events. I am not super into technology; I mean, it doesn't excite me. However, I like history and thoroughly enjoyed learning about the history of the computer, which kinda surprised me, ha! A timeline of computer history events is available from ComputerHope, with many entries for Apple, Windows, and Unix. Pictured to the right is a wall chart timeline from Konos. It is a large, sturdy timeline that can be used with any curriculum. This may go without saying, but you need to be seriously into computers to like this museum.
The name TCM had been retained by the Boston Museum of Science, so in 2000 the name TCMHC was changed to Computer History Museum (CHM). In 2008 Monash University held a series of events to celebrate the 50th anniversary of the creation of the University. This poster was removed in 2015 in response to protests that considered that it perpetuated a gender-biased view of Computer Science and Stanford CSD. After manufacturers started developing smaller and smaller technologies, the personal computer revolution truly took off. The Computer History Museum chronicles the 2,000-plus-year history of computing.
Several new computer models came out every year, each one more powerful than the ones before. Fascinating even for non-geeks, though kids will not find many interactive exhibits, which is surprising for a computer museum. The University of Virginia also has a computer museum, with photos of some of the exhibits on its web page along with links to other computer museums.
|
# -*- coding: utf-8 -*-
#
#
# Author: Yannick Vaucher, Leonardo Pistone
# Copyright 2014-2015 Camptocamp SA
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
#
from openerp import models, fields
class SaleOrder(models.Model):
_inherit = 'sale.order'
LO_STATES = {
'cancel': [('readonly', True)],
'progress': [('readonly', True)],
'manual': [('readonly', True)],
'shipping_except': [('readonly', True)],
'invoice_except': [('readonly', True)],
'done': [('readonly', True)],
}
consignee_id = fields.Many2one(
'res.partner',
string='Consignee',
states=LO_STATES,
help="The person to whom the shipment is to be delivered.")
|
Philippine politics from the perspective of a research professional, an activist, and a student. An attempt to provoke readers into responsible and critical analysis of hot-button issues.
There was no intention for me to begin my post-college career as early as I did. Nevertheless, I made it a point to devote the last days of my university life to job hunting.
The promise of a Daang Matuwid has always been a foremost priority of the Aquino government. Although noble, the effort exposes the present administration's vulnerability when it comes to cultivating the country's economic progress.
|
#!/usr/bin/env python
# vim: set expandtab shiftwidth=4:
'''
* This file is part of A2Billing (http://www.a2billing.net/)
*
* A2Billing, Commercial Open Source Telecom Billing platform,
* powered by Star2billing S.L. <http://www.star2billing.com/>
*
* @copyright Copyright (C) 2004-2009 - Star2billing S.L.
* @author Belaid Arezqui <areski@gmail.com>
* @license http://www.fsf.org/licensing/licenses/agpl-3.0.html
* @package A2Billing
*
* Software License Agreement (GNU Affero General Public License)
*
* This program is free software: you can redistribute it and/or modify
* it under the terms of the GNU Affero General Public License as
* published by the Free Software Foundation, either version 3 of the
* License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU Affero General Public License for more details.
*
* You should have received a copy of the GNU Affero General Public License
* along with this program. If not, see <http://www.gnu.org/licenses/>.
database.py
Module to connect to PostgreSQL & MySQL databases and manipulate database information.
'''
__author__ = "Belaid Arezqui (areski@gmail.com)"
__copyright__ = "Copyright (C) Belaid Arezqui"
__revision__ = "$Id$"
__version__ = "1.00"
# ------------------------------ IMPORT ------------------------------
import sys
INTP_VER = sys.version_info[:2]
if INTP_VER < (2, 2):
raise RuntimeError("Python v.2.2 or later needed")
import ConfigParser
from sqlalchemy import *
from sqlalchemy import orm
from sqlalchemy.orm import sessionmaker
import datetime, time
# ------------------------------ CLASS ------------------------------
class SQLError(Exception):
''' Error exception class '''
def __init__(self, value):
self.value = value
def __str__(self):
return repr(self.value)
class ConnectionError(Exception): pass
class SQlRow_Empty(Exception): pass
# Class for the ORM
# These are the empty classes that will become our data classes
class CallBack_Spool(object):
pass
class Server_Group(object):
pass
class Server_Manager(object):
pass
class callback_database:
    """A class to handle all modifications on the DB."""
    config_filename = None
    section = 'database' # override this
dbname = ''
dbhost = ''
dbport = None
dbopt = ''
dbtty = ''
dbuser = ''
dbpasswd = ''
dbtype = ''
count_server_manager = 0
# TODO : create it with protected __ for better design
def __init__(self):
# cool to call a function to fetch the conf
self.read_basic_config()
self.db_connect()
def read_basic_config(self):
"""Read basic options from the daemon config file"""
cp = ConfigParser.ConfigParser()
cp.read([self.config_filename])
self.config_parser = cp
self.dbname = cp.get(self.section, 'dbname')
self.dbhost = cp.get(self.section, 'hostname')
self.dbport = cp.get(self.section, 'port')
self.dbuser = cp.get(self.section, 'user')
self.dbpasswd = cp.get(self.section, 'password')
self.dbtype = cp.get(self.section, 'dbtype')
def status_on (self, status):
if (status.lower()=='on') :
return 'ACTIVE'
else :
return 'INACTIVE'
def db_connect (self):
if (len(self.dbpasswd) > 0) :
connection_string = self.dbtype + "://" + self.dbuser + ":" + self.dbpasswd + "@" + self.dbhost + "/" + self.dbname
else :
connection_string = self.dbtype + "://" + self.dbuser + "@" + self.dbhost + "/" + self.dbname
try:
self.engine = create_engine(connection_string)
self.engine.echo = False # Try changing this to True and see what happens
self.metadata = MetaData(self.engine)
Session = sessionmaker(bind=self.engine, autoflush=True)
# create a Session
self.session = Session()
self.cc_callback_spool = Table('cc_callback_spool', self.metadata, autoload=True)
self.cc_server_group = Table('cc_server_group', self.metadata, autoload=True)
self.cc_server_manager = Table('cc_server_manager', self.metadata, autoload=True)
# map to the class
CallBack_Spool_mapper = orm.mapper(CallBack_Spool, self.cc_callback_spool)
Server_Group_mapper = orm.mapper(Server_Group, self.cc_server_group)
Server_Manager_mapper = orm.mapper(Server_Manager, self.cc_server_manager)
self.CallBack_Spool_q = self.session.query(CallBack_Spool)
self.Server_Manager_q = self.session.query(Server_Manager)
except Exception, error_message:
#print "connection error to " + connection_string
raise ConnectionError(error_message)
def db_close (self):
try:
self.session.flush()
except Exception, error_message:
raise SQLError(error_message)
def count_callback_spool(self):
return self.CallBack_Spool_q.filter((self.cc_callback_spool.c.status=='PENDING')).count()
def find_server_manager(self, c_id_group):
get_Server_Manager = self.Server_Manager_q.filter(
(self.cc_server_manager.c.id_group==c_id_group)
).all()
return get_Server_Manager
def find_server_manager_roundrobin(self, c_id_group):
nball_Server_Manager = self.Server_Manager_q.filter(
(self.cc_server_manager.c.id_group==c_id_group)
).count()
if (nball_Server_Manager == 0):
raise SQlRow_Empty("No Server_Manager has been found for this idgroup : "+ str(c_id_group))
nb_sel_Server_Manager = (self.count_server_manager % nball_Server_Manager) + 1
selected_Server_Manager = self.Server_Manager_q.get(nb_sel_Server_Manager)
self.count_server_manager = self.count_server_manager + 1
return selected_Server_Manager
def find_callback_request(self, c_status = 'PENDING', c_hours = 24):
get_CallBack_Spool = self.CallBack_Spool_q.filter(
(self.cc_callback_spool.c.status==c_status) &
(self.cc_callback_spool.c.entry_time > datetime.datetime.now() - datetime.timedelta(hours=c_hours)) &
((self.cc_callback_spool.c.callback_time==None) | (self.cc_callback_spool.c.callback_time < datetime.datetime.now()))
).all()
return get_CallBack_Spool
def update_callback_request (self, c_id, c_status):
try:
get_CallBack_Spool = self.CallBack_Spool_q.filter((self.cc_callback_spool.c.id == c_id)).one()
get_CallBack_Spool.status = c_status
self.session.flush()
except:
#print "--- nothing to update ---"
pass
def update_callback_request_server (self, c_id, c_status, c_id_server, c_manager_result):
try:
get_CallBack_Spool = self.CallBack_Spool_q.filter((self.cc_callback_spool.c.id == c_id)).one()
get_CallBack_Spool.status = c_status
get_CallBack_Spool.id_server = c_id_server
get_CallBack_Spool.manager_result = c_manager_result
get_CallBack_Spool.num_attempt += 1
            get_CallBack_Spool.last_attempt_time = func.now()
self.session.flush()
except:
#print "--- nothing to update ---"
pass
# ------------------------------ MAIN ------------------------------
if __name__ == "__main__":
"""
print "\n\n"
inst_cb_db = callback_database()
print inst_cb_db.count_callback_spool()
print
get_CallBack_Spool = inst_cb_db.find_callback_request('SENT', 121212)
for p in get_CallBack_Spool[0:5]:
print p.id,' ===========>>> >>> ',p.uniqueid, '>> ',p.status, '>> ',p.num_attempt, ' ::>> ',p.id_server, ' ::>> ',p.manager_result
inst_cb_db.update_callback_request (5, 'SENT')
inst_cb_db.update_callback_request (5, 'SENT')
inst_cb_db.update_callback_request_server (5, 'SENT', 77, 'rhaaaaaaaa')
print
get_Server_Manager = inst_cb_db.find_server_manager(1)
for p in get_Server_Manager[0:5]:
print p.id,' ===========>>> >>> ',p.id_group, '>> ',p.server_ip, '>> ',p.manager_username
try:
get_Server_Manager = inst_cb_db.find_server_manager_roundrobin(11)
print get_Server_Manager.id,' ===========>>> >>> ',get_Server_Manager.id_group, '>> ',get_Server_Manager.server_ip, '>> ',get_Server_Manager.manager_username
except:
print "--- no manager ---"
pass
"""
|
Stay in the vibrant epicenter of New York City: Times Square. Whether traveling on business or for leisure, our central location in Midtown Manhattan is sure to be more comfortable, more productive and more enjoyable than ever before. From shopping and dining to sightseeing and business, you’re only steps from the best Manhattan has to offer.
|
#!/usr/bin/env python3
#-*- coding: iso-8859-1 -*-
################################################################################
#
# Function guards for Python 3.
#
# (c) 2016, Dmitry Dvoinikov <dmitry@targeted.org>
# Distributed under MIT license.
#
# Samples:
#
# from funcguard import guard
#
# @guard
# def abs(a, _when = "a >= 0"):
# return a
#
# @guard
# def abs(a, _when = "a < 0"):
# return -a
#
# assert abs(1) == abs(-1) == 1
#
# @guard
# def factorial(n): # no _when expression => default
# return 1
#
# @guard
# def factorial(n, _when = "n > 1"):
# return n * factorial(n - 1)
#
# assert factorial(10) == 3628800
#
# class TypeTeller:
# @staticmethod
# @guard
# def typeof(value, _when = "isinstance(value, int)"):
# return int
# @staticmethod
# @guard
# def typeof(value, _when = "isinstance(value, str)"):
# return str
#
# assert TypeTeller.typeof(0) is int
# TypeTeller.typeof(0.0) # throws
#
# class AllowedProcessor:
# def __init__(self, allowed):
# self._allowed = allowed
# @guard
# def process(self, value, _when = "value in self._allowed"):
# return "ok"
# @guard
# def process(self, value): # no _when expression => default
# return "fail"
#
# ap = AllowedProcessor({1, 2, 3})
# assert ap.process(1) == "ok"
# assert ap.process(0) == "fail"
#
# guard.default_eval_args( # values to insert to all guards scopes
# office_hours = lambda: 9 <= datetime.now().hour < 18)
#
# @guard
# def at_work(*args, _when = "office_hours()", **kwargs):
# print("welcome")
#
# @guard
# def at_work(*args, **kwargs):
# print("come back tomorrow")
#
# at_work() # either "welcome" or "come back tomorrow"
#
# The complete source code with self-tests is available from:
# https://github.com/targeted/funcguard
#
################################################################################
__all__ = [ "guard", "GuardException", "IncompatibleFunctionsException",
"FunctionArgumentsMatchException", "GuardExpressionException",
"DuplicateDefaultGuardException", "GuardEvalException",
"NoMatchingFunctionException" ]
################################################################################
import inspect; from inspect import getfullargspec
import functools; from functools import wraps
import sys; from sys import modules
try:
(lambda: None).__qualname__
except AttributeError:
import qualname; from qualname import qualname # prior to Python 3.3 workaround
else:
qualname = lambda f: f.__qualname__
################################################################################
class GuardException(Exception): pass
class IncompatibleFunctionsException(GuardException): pass
class FunctionArgumentsMatchException(GuardException): pass
class GuardExpressionException(GuardException): pass
class DuplicateDefaultGuardException(GuardException): pass
class GuardEvalException(GuardException): pass
class NoMatchingFunctionException(GuardException): pass
################################################################################
# takes an argument specification for a function and a set of actual call
# positional and keyword arguments, returns a flat namespace-like dict
# mapping parameter names to their actual values
def _eval_args(argspec, args, kwargs):
# match positional arguments
matched_args = {}
expected_args = argspec.args
default_args = argspec.defaults or ()
_many = lambda t: "argument" + ("s" if len(t) != 1 else "")
# copy provided args to expected, append defaults if necessary
for i, name in enumerate(expected_args):
if i < len(args):
value = args[i]
elif i >= len(expected_args) - len(default_args):
value = argspec.defaults[i - len(expected_args) + len(default_args)]
else:
missing_args = expected_args[len(args):len(expected_args) - len(default_args)]
raise FunctionArgumentsMatchException("missing required positional {0:s}: {1:s}".\
format(_many(missing_args), ", ".join(missing_args)))
matched_args[name] = value
# put extra provided args to *args if the function allows
if argspec.varargs:
matched_args[argspec.varargs] = args[len(expected_args):] if len(args) > len(expected_args) else ()
elif len(args) > len(expected_args):
raise FunctionArgumentsMatchException(
"takes {0:d} positional {1:s} but {2:d} {3:s} given".
format(len(expected_args), _many(expected_args),
len(args), len(args) == 1 and "was" or "were"))
# match keyword arguments
matched_kwargs = {}
expected_kwargs = argspec.kwonlyargs
default_kwargs = argspec.kwonlydefaults or {}
# extract expected kwargs from provided, using defaults if necessary
missing_kwargs = []
for name in expected_kwargs:
if name in kwargs:
matched_kwargs[name] = kwargs[name]
elif name in default_kwargs:
matched_kwargs[name] = default_kwargs[name]
else:
missing_kwargs.append(name)
if missing_kwargs:
raise FunctionArgumentsMatchException("missing required keyword {0:s}: {1:s}".\
format(_many(missing_kwargs), ", ".join(missing_kwargs)))
extra_kwarg_names = [ name for name in kwargs if name not in matched_kwargs ]
if argspec.varkw:
if extra_kwarg_names:
extra_kwargs = { name: kwargs[name] for name in extra_kwarg_names }
else:
extra_kwargs = {}
matched_args[argspec.varkw] = extra_kwargs
elif extra_kwarg_names:
raise FunctionArgumentsMatchException("got unexpected keyword {0:s}: {1:s}".\
format(_many(extra_kwarg_names), ", ".join(extra_kwarg_names)))
    # both positional and keyword arguments are returned in the same scope-like dict
for name, value in matched_kwargs.items():
matched_args[name] = value
return matched_args
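The flat argument map that _eval_args builds is close to what the standard library already offers; a minimal sketch using inspect.getcallargs (a hypothetical stand-in for illustration, not this module's own code):

```python
import inspect

def sample(a, b=2, *rest, flag=True, **extra):
    return None

# positional, default, *args, keyword-only and **kwargs values all land in
# one dict, just like the scope-like dict _eval_args returns
mapped = inspect.getcallargs(sample, 1, flag=False, x=9)
assert mapped == {"a": 1, "b": 2, "rest": (), "flag": False, "extra": {"x": 9}}
```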
################################################################################
# takes an argument specification for a function, from it extracts and returns
# a compiled expression which is to be matched against call arguments
def _get_guard_expr(func_name, argspec):
guard_expr_text = None
if "_when" in argspec.args:
defaults = argspec.defaults or ()
i = argspec.args.index("_when")
if i >= len(argspec.args) - len(defaults):
guard_expr_text = defaults[i - len(argspec.args) + len(defaults)]
elif "_when" in argspec.kwonlyargs:
guard_expr_text = (argspec.kwonlydefaults or {}).get("_when")
else:
return None # indicates default guard
if guard_expr_text is None:
raise GuardExpressionException("guarded function {0:s}() requires a \"_when\" "
"argument with guard expression text as its "
"default value".format(func_name))
try:
guard_expr = compile(guard_expr_text, func_name, "eval")
except Exception as e:
error = str(e)
else:
error = None
if error is not None:
raise GuardExpressionException("invalid guard expression for {0:s}(): "
"{1:s}".format(func_name, error))
return guard_expr
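Each guard is just an eval-mode expression compiled once at decoration time and evaluated per call against the matched argument scope; the core cycle in miniature:

```python
# compile once, evaluate per call against the argument dict
guard_expr = compile("n > 1", "factorial", "eval")
assert eval(guard_expr, {}, {"n": 10}) is True
assert eval(guard_expr, {}, {"n": 0}) is False
```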
################################################################################
# checks whether two functions' argspecs are compatible to be guarded as one,
# compatible argspecs have identical positional and keyword parameters except
# for "_when" and annotations
def _compatible_argspecs(argspec1, argspec2):
return _stripped_argspec(argspec1) == _stripped_argspec(argspec2)
def _stripped_argspec(argspec):
args = argspec.args[:]
defaults = list(argspec.defaults or ())
kwonlyargs = argspec.kwonlyargs[:]
kwonlydefaults = (argspec.kwonlydefaults or {}).copy()
if "_when" in args:
i = args.index("_when")
if i >= len(args) - len(defaults):
del defaults[i - len(args) + len(defaults)]
del args[i]
elif "_when" in kwonlyargs and "_when" in kwonlydefaults:
i = kwonlyargs.index("_when")
del kwonlyargs[i]
del kwonlydefaults["_when"]
return (args, defaults, kwonlyargs, kwonlydefaults, argspec.varargs, argspec.varkw)
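Two guarded versions of a function should differ only in their "_when" default; a quick check with getfullargspec showing that the guard text is the only signature delta _stripped_argspec has to remove:

```python
import inspect

spec_pos = inspect.getfullargspec(lambda a, _when="a >= 0": a)
spec_neg = inspect.getfullargspec(lambda a, _when="a < 0": -a)
# identical parameter lists, different guard text in the defaults
assert spec_pos.args == spec_neg.args == ["a", "_when"]
assert spec_pos.defaults != spec_neg.defaults
```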
################################################################################
def guard(func, module = None): # the main decorator function
    # see if it is a function or a lambda
try:
eval(func.__name__)
except SyntaxError:
return func # <lambda> => not guarded
except NameError:
pass # valid name
# get to the bottom of a possible decorator chain
# to get the original function's specification
original_func = func
while hasattr(original_func, "__wrapped__"):
original_func = original_func.__wrapped__
func_name = qualname(original_func)
func_module = module or modules[func.__module__] # module serves only as a place to keep state
argspec = getfullargspec(original_func)
    # the registry of known guarded functions is attached to the module containing them
guarded_functions = getattr(func_module, "__guarded_functions__", None)
if guarded_functions is None:
guarded_functions = func_module.__guarded_functions__ = {}
original_argspec, first_guard, last_guard = guard_info = \
guarded_functions.setdefault(func_name, [argspec, None, None])
# all the guarded functions with the same name must have identical signature
if argspec is not original_argspec and not _compatible_argspecs(argspec, original_argspec):
        raise IncompatibleFunctionsException("function signature is incompatible "
                                             "with the previously registered {0:s}()".format(func_name))
@wraps(func)
def func_guard(*args, **kwargs): # the call proxy function
# since all versions of the function have essentially identical signatures,
# their mapping to the actually provided arguments can be calculated once
# for each call and not against every version of the function
try:
eval_args = _eval_args(argspec, args, kwargs)
except FunctionArgumentsMatchException as e:
error = str(e)
else:
error = None
if error is not None:
raise FunctionArgumentsMatchException("{0:s}() {1:s}".format(func_name, error))
for name, value in guard.__default_eval_args__.items():
eval_args.setdefault(name, value)
# walk the chain of function versions starting with the first, looking
# for the one for which the guard expression evaluates to truth
current_guard = func_guard.__first_guard__
while current_guard:
try:
if not current_guard.__guard_expr__ or \
eval(current_guard.__guard_expr__, globals(), eval_args):
break
except Exception as e:
error = str(e)
else:
error = None
if error is not None:
raise GuardEvalException("guard expression evaluation failed for "
"{0:s}(): {1:s}".format(func_name, error))
current_guard = current_guard.__next_guard__
else:
raise NoMatchingFunctionException("none of the guard expressions for {0:s}() "
"matched the call arguments".format(func_name))
return current_guard.__wrapped__(*args, **kwargs) # call the winning function version
    # in different versions of Python @wraps behaves differently with regards
# to __wrapped__, therefore we set it the way we need it here
func_guard.__wrapped__ = func
# the guard expression is attached
func_guard.__guard_expr__ = _get_guard_expr(func_name, argspec)
# maintain a linked list for all versions of the function
if last_guard and not last_guard.__guard_expr__: # the list is not empty and the
# last guard is already a default
if not func_guard.__guard_expr__:
raise DuplicateDefaultGuardException("the default version of {0:s}() has already "
"been specified".format(func_name))
# the new guard has to be inserted one before the last
if first_guard is last_guard: # the list contains just one guard
# new becomes first, last is not changed
first_guard.__first_guard__ = func_guard.__first_guard__ = func_guard
func_guard.__next_guard__ = first_guard
first_guard = guard_info[1] = func_guard
else: # the list contains more than one guard
# neither first nor last are changed
prev_guard = first_guard
while prev_guard.__next_guard__ is not last_guard:
prev_guard = prev_guard.__next_guard__
func_guard.__first_guard__ = first_guard
func_guard.__next_guard__ = last_guard
prev_guard.__next_guard__ = func_guard
else: # the new guard is inserted last
if not first_guard:
first_guard = guard_info[1] = func_guard
func_guard.__first_guard__ = first_guard
func_guard.__next_guard__ = None
if last_guard:
last_guard.__next_guard__ = func_guard
last_guard = guard_info[2] = func_guard
return func_guard
guard.__default_eval_args__ = {}
guard.default_eval_args = lambda *args, **kwargs: guard.__default_eval_args__.update(*args, **kwargs)
################################################################################
# EOF
|
Laptop Safe is the solution to laptop theft, it protects your company’s or your personal computers and data discreetly and effectively.
Laptop Safe is an innovative solution in laptop security designed and tested to reduce laptop theft.
Over the last few years we have seen a huge upturn in the use of laptop computers from one-man operations to multi-nationals, hospitals and schools. Unfortunately, this has led to a consequent rise in laptop thefts from offices and vehicles. We are proud to offer our new in-house designed and built Laptop Security Safes. Because we manufacture on the premises we are able to offer a tailor-made service for large batch orders, as well as our off-the-peg range.
Our Company has suffered several thefts of laptops over the last few months. This has generated a lot of concern over the data which has gone missing. The LaptopSafe is an excellent solution to this problem.
With the number of laptops in use growing, our range of Laptop Storage products provides cost-effective lockable storage for laptops and laptop accessories.
The value of IT losses is generally underestimated, with the real value of the contents of the laptop being difficult to assess. The chance of a stolen laptop being recovered is virtually nil!
Last year around 100,000 laptops were stolen from vehicles, that’s over 270 laptops per day and 6.2% of vehicle insurance claims were the result of laptop theft.
LaptopSafe is the solution to laptop theft. Manufactured from steel with a strong locking mechanism and unique steel cable and locking bolt – virtually impregnable – no tools or expertise required to fit. LaptopSafe is an effective deterrent – buy your peace of mind today.
Terms & Conditions © 2019 LaptopSafe. All rights reserved.
|
# Copyright 2017-2021 TensorHub, Inc.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import datetime
import functools
import importlib
import inspect
import logging
import os
import sys
import threading
import warnings
import six
with warnings.catch_warnings():
warnings.simplefilter("ignore", Warning)
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
warnings.filterwarnings("ignore", message="numpy.ufunc size changed")
try:
import pandas as pd
except ImportError:
raise RuntimeError(
"guild.ipy requires pandas - install it first before using "
"this module (see https://pandas.pydata.org/pandas-docs/stable/"
"install.html for help)"
)
# ipy makes use of the full Guild API and so, like main_bootstrap,
# requires the external modules.
from guild import main_bootstrap
main_bootstrap.ensure_external_path()
from guild import batch_util
from guild import click_util
from guild import config
from guild import exit_code
from guild import index as indexlib
from guild import model_proxy
from guild import op_util
from guild import opref as opreflib
from guild import run as runlib
from guild import run_util
from guild import summary
from guild import util
from guild import var
from guild.commands import runs_impl
log = logging.getLogger("guild")
RUN_DETAIL = [
"id",
"operation",
"status",
"started",
"stopped",
"label",
"run_dir",
]
DEFAULT_MAX_TRIALS = 20
class RunException(Exception):
def __init__(self, run, from_exc):
super(RunException, self).__init__(run, from_exc)
self.run = run
self.from_exc = from_exc
class RunError(RunException):
pass
class RunTerminated(RunException):
pass
class OutputTee(object):
def __init__(self, fs, lock):
self._fs = fs
self._lock = lock
def write(self, s):
with self._lock:
for f in self._fs:
f.write(s)
def flush(self):
with self._lock:
for f in self._fs:
f.flush()
class RunOutput(object):
def __init__(self, run, summary=None):
self.run = run
self.summary = summary
self._f = None
self._f_lock = None
self._stdout = None
self._stderr = None
def __enter__(self):
self._f = open(self.run.guild_path("output"), "w")
self._f_lock = threading.Lock()
self._stdout = sys.stdout
sys.stdout = OutputTee(self._tee_fs(sys.stdout), self._f_lock)
self._stderr = sys.stderr
sys.stderr = OutputTee(self._tee_fs(sys.stderr), self._f_lock)
def _tee_fs(self, iof):
fs = [iof, self._f]
if self.summary:
fs.append(self.summary)
return fs
def __exit__(self, *exc):
with self._f_lock:
self._f.close()
if self.summary:
self.summary.close()
sys.stdout = self._stdout
sys.stderr = self._stderr
@functools.total_ordering
class RunIndex(object):
def __init__(self, run, fmt):
self.value = run
self.run = run # backward compatible alias
self.fmt = fmt
def __str__(self):
return self.value.short_id
def __eq__(self, x):
return self._x_id(x) == self.value.id
def __lt__(self, x):
return self.value.id < self._x_id(x)
@staticmethod
def _x_id(x):
if isinstance(x, six.string_types):
return x
elif isinstance(x, RunIndex):
return x.value.id
return None
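RunIndex only defines __eq__ and __lt__; functools.total_ordering synthesizes the remaining comparison methods from those two. A minimal sketch of the pattern (hypothetical class, not Guild's):

```python
import functools

@functools.total_ordering
class Key:
    def __init__(self, v):
        self.v = v
    def __eq__(self, other):
        return self.v == other.v
    def __lt__(self, other):
        return self.v < other.v

# __le__, __gt__ and __ge__ are derived from the two methods above
assert Key("abc") <= Key("abd")
assert Key("abd") > Key("abc")
```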
class RunsSeries(pd.Series):
@property
def _constructor(self):
return RunsSeries
@property
def _constructor_expanddim(self):
return RunsDataFrame
def delete(self, **kw):
self.to_frame().delete(**kw)
def info(self, **kw):
_print_run_info(self[0], **kw)
def scalars(self):
return _runs_scalars([self[0].value])
def scalars_detail(self):
return _runs_scalars_detail([self[0].value])
def flags(self):
return _runs_flags([self[0].value])
def compare(self):
return _runs_compare([self[0]])
class RunsDataFrame(pd.DataFrame):
@property
def _constructor(self):
return RunsDataFrame
@property
def _constructor_sliced(self):
return RunsSeries
@property
def _constructor_expanddim(self):
return RunsDataFrame
def delete(self, permanent=False):
runs = self._runs()
var.delete_runs(runs, permanent)
return [run.id for run in runs]
def _runs(self):
return [row[1][0].value for row in self.iterrows()]
def _items(self):
return [row[1][0] for row in self.iterrows()]
# pylint: disable=arguments-differ
def info(self, *args, **kw):
self.loc[0].info(*args, **kw)
def scalars(self):
return _runs_scalars(self._runs())
def scalars_detail(self):
return _runs_scalars_detail(self._runs())
def flags(self):
return _runs_flags(self._runs())
def compare(self):
return _runs_compare(self._items())
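The _constructor/_constructor_sliced/_constructor_expanddim properties above are what keep pandas operations returning the subclass instead of a plain DataFrame; a standalone sketch of the mechanism:

```python
import pandas as pd

class MyFrame(pd.DataFrame):
    @property
    def _constructor(self):
        return MyFrame

df = MyFrame({"a": [1, 2, 3]})
filtered = df[df["a"] > 1]
# without _constructor, filtering would hand back a plain DataFrame
assert isinstance(filtered, MyFrame)
assert list(filtered["a"]) == [2, 3]
```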
class Batch(object):
def __init__(self, gen_trials, op, flag_vals, opts):
self.gen_trials = gen_trials
self.op = op
self.flag_vals = _coerce_range_functions(flag_vals)
self.opts = opts
def __call__(self):
runs = []
results = []
prev_results_cb = lambda: (runs, results)
for trial in self.gen_trials(self.flag_vals, prev_results_cb, **self.opts):
trial_flag_vals, trial_attrs = _split_gen_trial(trial)
print(
"Running %s (%s):"
% (self.op.__name__, op_util.flags_desc(trial_flag_vals))
)
run, result = _run(self.op, trial_flag_vals, self.opts, trial_attrs)
runs.append(run)
results.append(result)
return runs, results
def _split_gen_trial(trial):
if isinstance(trial, tuple):
assert len(trial) == 2, ("generated trial must be a two-tuple or a dict", trial)
return trial
else:
return trial, {}
def _coerce_range_functions(flag_vals):
return {name: _coerce_range_function(val) for name, val in flag_vals.items()}
def _coerce_range_function(val):
if isinstance(val, RangeFunction):
return str(val)
return val
class RangeFunction(object):
def __init__(self, name, *args):
self.name = name
self.args = args
def __str__(self):
args = ":".join([str(arg) for arg in self.args])
return "%s[%s]" % (self.name, args)
def batch_gen_trials(flag_vals, _prev_trials_cb, max_trials=None, **kw):
if kw:
log.warning("ignoring batch config: %s", kw)
max_trials = max_trials or DEFAULT_MAX_TRIALS
trials = 0
for trial_flag_vals in batch_util.expand_flags(flag_vals):
if trials >= max_trials:
return
trials += 1
yield trial_flag_vals
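batch_util.expand_flags (imported above) expands list-valued flags into the cartesian product of trials; an equivalent sketch with itertools.product (a hypothetical helper for illustration, not Guild's implementation):

```python
import itertools

def expand(flag_vals):
    # every non-list value is treated as a single-element axis
    names = sorted(flag_vals)
    axes = [flag_vals[n] if isinstance(flag_vals[n], list) else [flag_vals[n]]
            for n in names]
    return [dict(zip(names, combo)) for combo in itertools.product(*axes)]

trials = expand({"lr": [0.1, 0.01], "epochs": 5})
assert trials == [{"epochs": 5, "lr": 0.1}, {"epochs": 5, "lr": 0.01}]
```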
def optimizer_trial_generator(model_op):
main_mod = _optimizer_module(model_op.module_name)
try:
return main_mod.gen_trials
except AttributeError:
raise TypeError(
"%s optimizer module does not implement gen_trials" % main_mod.__name__
)
def _optimizer_module(module_name):
return importlib.import_module(module_name)
def uniform(low, high):
return RangeFunction("uniform", low, high)
def loguniform(low, high):
return RangeFunction("loguniform", low, high)
def run(op, *args, **kw):
if not callable(op):
raise ValueError("op must be callable")
opts = _pop_opts(kw)
flag_vals = _init_flag_vals(op, args, kw)
run = _init_runner(op, flag_vals, opts)
return run()
def _pop_opts(kw):
opts = {}
for name in list(kw):
if name[:1] == "_":
opts[name[1:]] = kw.pop(name)
return opts
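Underscore-prefixed keywords are treated as Guild options while everything else is a flag; the split can be exercised on its own (local copy of the helper above):

```python
def pop_opts(kw):
    # move "_name" entries out of kw, stripping the underscore prefix
    opts = {}
    for name in list(kw):
        if name[:1] == "_":
            opts[name[1:]] = kw.pop(name)
    return opts

kw = {"lr": 0.1, "_label": "trial-1"}
assert pop_opts(kw) == {"label": "trial-1"}
assert kw == {"lr": 0.1}  # flags stay behind
```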
def _init_flag_vals(op, args, kw):
# pylint: disable=deprecated-method
op_f = _op_f(op)
op_flag_vals = inspect.getcallargs(op_f, *args, **kw)
_remove_bound_method_self(op_f, op_flag_vals)
return _coerce_slice_vals(op_flag_vals)
def _op_f(op):
assert callable(op), repr(op)
if inspect.isfunction(op) or inspect.ismethod(op):
return op
assert hasattr(op, "__call__")
return op.__call__
def _remove_bound_method_self(op, op_flag_vals):
im_self = util.find_apply(
[
lambda: getattr(op, "__self__", None),
lambda: getattr(op, "im_self", None),
]
)
if im_self:
for key, val in op_flag_vals.items():
if val is im_self:
del op_flag_vals[key]
break
else:
assert False, (op_flag_vals, im_self)
def _coerce_slice_vals(flag_vals):
return {name: _coerce_slice_val(val) for name, val in flag_vals.items()}
def _coerce_slice_val(val):
if isinstance(val, slice):
return uniform(val.start, val.stop)
return val
def _init_runner(op, flag_vals, opts):
return util.find_apply(
[_optimize_runner, _batch_runner, _single_runner], op, flag_vals, opts
)
def _optimize_runner(op, flag_vals, opts):
optimizer = opts.get("optimizer")
if not optimizer:
return _maybe_random_runner(op, flag_vals, opts)
opts = _filter_kw(opts, ["optimizer"])
return Batch(_init_gen_trials(optimizer), op, flag_vals, opts)
def _filter_kw(opts, keys):
return {k: v for k, v in opts.items() if k not in keys}
def _maybe_random_runner(op, flag_vals, opts):
assert not opts.get("optimizer"), opts
for val in flag_vals.values():
if isinstance(val, RangeFunction):
return Batch(_init_gen_trials("random"), op, flag_vals, opts)
return None
def _init_gen_trials(optimizer):
try:
model_op, _name = model_proxy.resolve_plugin_model_op(optimizer)
except model_proxy.NotSupported:
raise TypeError("optimizer %r is not supported" % optimizer)
else:
return optimizer_trial_generator(model_op)
def _batch_runner(op, flag_vals, opts):
for val in flag_vals.values():
if isinstance(val, list):
return Batch(batch_gen_trials, op, flag_vals, opts)
return None
def _single_runner(op, flag_vals, opts):
return lambda: _run(op, flag_vals, opts)
def _run(op, flag_vals, opts, extra_attrs=None):
run = _init_run()
_init_run_attrs(run, op, flag_vals, opts, extra_attrs)
summary = _init_output_scalars(run, opts)
try:
with RunOutput(run, summary):
_write_proc_lock(run)
with util.Chdir(run.path):
result = op(**flag_vals)
except KeyboardInterrupt as e:
exit_status = exit_code.KEYBOARD_INTERRUPT
util.raise_from(RunTerminated(run, e), e)
except Exception as e:
exit_status = exit_code.DEFAULT_ERROR
util.raise_from(RunError(run, e), e)
else:
exit_status = 0
return run, result
finally:
_finalize_run(run, exit_status)
def _init_run():
run_id = runlib.mkid()
run_dir = os.path.join(var.runs_dir(), run_id)
run = runlib.Run(run_id, run_dir)
run.init_skel()
return run
def _init_run_attrs(run, op, flag_vals, opts, extra_attrs):
opref = opreflib.OpRef("func", "", "", "", _op_name(op, opts))
run.write_opref(opref)
run.write_attr("started", runlib.timestamp())
run.write_attr("flags", flag_vals)
run.write_attr("label", _run_label(flag_vals, opts))
if extra_attrs:
for name, val in extra_attrs.items():
run.write_attr(name, val)
def _op_name(op, opts):
return opts.get("op_name") or _default_op_name(op)
def _default_op_name(op):
if inspect.isfunction(op) or inspect.ismethod(op):
return op.__name__
return op.__class__.__name__
def _run_label(flag_vals, opts):
return op_util.run_label(_label_template(opts), flag_vals)
def _label_template(opts):
return util.find_apply([_explicit_label, _tagged_label], opts)
def _explicit_label(opts):
return opts.get("label")
def _tagged_label(opts):
try:
tag = opts["tag"]
except KeyError:
return None
else:
return "%s ${default_label}" % tag
def _init_output_scalars(run, opts):
config = opts.get("output_scalars", summary.DEFAULT_OUTPUT_SCALARS)
if not config:
return None
abs_guild_path = os.path.abspath(run.guild_path())
return summary.OutputScalars(config, abs_guild_path)
def _write_proc_lock(run):
op_util.write_proc_lock(os.getpid(), run)
def _finalize_run(run, exit_status):
run.write_attr("exit_status", exit_status)
run.write_attr("stopped", runlib.timestamp())
op_util.delete_proc_lock(run)
def runs(**kw):
runs = runs_impl.filtered_runs(_runs_cmd_args(**kw))
data, cols = _format_runs(runs)
return RunsDataFrame(data=data, columns=cols)
def _runs_cmd_args(
operations=None,
labels=None,
tags=None,
comments=None,
running=False,
completed=False,
error=False,
terminated=False,
pending=False,
staged=False,
unlabeled=None,
marked=False,
unmarked=False,
started=None,
digest=None,
deleted=None,
remote=None,
):
operations = operations or ()
labels = labels or ()
tags = tags or ()
comments = comments or ()
return click_util.Args(
filter_ops=operations,
filter_labels=labels,
filter_tags=tags,
filter_comments=comments,
status_running=running,
status_completed=completed,
status_error=error,
status_terminated=terminated,
status_pending=pending,
status_staged=staged,
filter_unlabeled=unlabeled,
filter_marked=marked,
filter_unmarked=unmarked,
filter_started=started,
filter_digest=digest,
deleted=deleted,
remote=remote,
)
def _format_runs(runs):
cols = (
"run",
"operation",
"started",
"status",
"label",
)
data = [_format_run(run, cols) for run in runs]
return data, cols
def _format_run(run, cols):
fmt = run_util.format_run(run)
return [_run_attr(run, name, fmt) for name in cols]
def _run_attr(run, name, fmt):
if name == "run":
return RunIndex(run, fmt)
elif name in ("operation",):
return fmt[name]
elif name in ("started", "stopped"):
return _datetime(run.get(name))
elif name in ("label",):
return run.get(name, "")
elif name == "time":
return _run_time(run)
else:
return getattr(run, name)
def _datetime(ts):
if ts is None:
return None
return datetime.datetime.fromtimestamp(int(ts / 1000000))
def _run_time(run):
formatted_time = util.format_duration(run.get("started"), run.get("stopped"))
return pd.to_timedelta(formatted_time)
def _print_run_info(item, output=False, scalars=False):
for name in RUN_DETAIL:
print("%s: %s" % (name, item.fmt.get(name, "")))
print("flags:", end="")
print(run_util.format_attr(item.value.get("flags", "")))
if scalars:
print("scalars:")
for s in indexlib.iter_run_scalars(item.value):
print(" %s: %f (step %i)" % (s["tag"], s["last_val"], s["last_step"]))
if output:
print("output:")
for line in run_util.iter_output(item.value):
print(" %s" % line, end="")
def _runs_scalars(runs):
data = []
cols = [
"run",
"prefix",
"tag",
"first_val",
"first_step",
"last_val",
"last_step",
"min_val",
"min_step",
"max_val",
"max_step",
"avg_val",
"count",
"total",
]
for run in runs:
for s in indexlib.iter_run_scalars(run):
data.append(s)
return pd.DataFrame(data, columns=cols)
def _runs_scalars_detail(runs):
from guild import tfevent
data = []
cols = [
"run",
"path",
"tag",
"val",
"step",
]
for run in runs:
for path, _run_id, scalars in tfevent.scalar_readers(run.dir):
rel_path = os.path.relpath(path, run.dir)
for tag, val, step in scalars:
data.append([run, rel_path, tag, val, step])
return pd.DataFrame(data, columns=cols)
def _runs_flags(runs):
data = [_run_flags_data(run) for run in runs]
return pd.DataFrame(data)
def _run_flags_data(run):
data = run.get("flags") or {}
data[_run_flags_key(data)] = run.id
return data
def _run_flags_key(flag_vals):
run_key = "run"
while run_key in flag_vals:
run_key = "_" + run_key
return run_key
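If a run's flags already use the name "run", the column key is prefixed with underscores until it no longer collides; a standalone check (local copy of the helper above):

```python
def run_flags_key(flag_vals):
    # keep prefixing until the key no longer collides with a flag name
    run_key = "run"
    while run_key in flag_vals:
        run_key = "_" + run_key
    return run_key

assert run_flags_key({"lr": 0.1}) == "run"
assert run_flags_key({"run": 1}) == "_run"
assert run_flags_key({"run": 1, "_run": 2}) == "__run"
```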
def _runs_compare(items):
core_cols = ["run", "operation", "started", "time", "status", "label"]
flag_cols = set()
scalar_cols = set()
data = []
for item in items:
row_data = {}
data.append(row_data)
# Order matters here - we want flag vals to take precedence
# over scalar vals with the same name.
_apply_scalar_data(item.value, scalar_cols, row_data)
_apply_flag_data(item.value, flag_cols, row_data)
_apply_run_core_data(item, core_cols, row_data)
cols = core_cols + sorted(flag_cols) + _sort_scalar_cols(scalar_cols, flag_cols)
return pd.DataFrame(data, columns=cols)
def _apply_scalar_data(run, cols, data):
for name, val in _run_scalar_data(run).items():
cols.add(name)
data[name] = val
def _run_scalar_data(run):
data = {}
step = None
last_step = None
for s in indexlib.iter_run_scalars(run):
key = s["tag"]
data[key] = s["last_val"]
last_step = s["last_step"]
if key == "loss":
step = last_step
if data:
if step is None:
step = last_step
data["step"] = step
return data
def _apply_flag_data(run, cols, data):
for name, val in _run_flags_data(run).items():
if name == "run":
continue
cols.add(name)
data[name] = val
def _apply_run_core_data(item, cols, data):
for name in cols:
data[name] = _run_attr(item.value, name, item.fmt)
def _sort_scalar_cols(scalar_cols, flag_cols):
# - List step first if it exists
# - Don't include flag cols in result
cols = []
if "step" in scalar_cols:
cols.append("step")
for col in sorted(scalar_cols):
if col == "step" or col in flag_cols:
continue
cols.append(col)
return cols
def guild_home():
return config.guild_home()
def set_guild_home(path):
config.set_guild_home(path)
|
Snippersgate have a dedicated team of finance experts waiting to help with your enquiry. Our specialist lenders include Santander Consumer Finance, Close Motor Finance, Alphera, Barclaycard, Mann Island Finance and MotoNovo. We are able to offer lower rates than high street banks.
With fast response times and enthusiastic and expert assistance, we can help you choose between Hire Purchase, Lease Purchase and PCP Deals, with terms from 12 to 60 months to provide a bespoke finance package.
We are able to beat any quote from Zuto or Car Finance 24/7.
|
import glob
import sys
import logging
import datetime
import pandas as pd
from os import path, makedirs, rename
from influxdb import DataFrameClient
from time import gmtime
from parsing import bin_to_df
from bdas.settings import DATABASE, BIN_DIR, PROCESSED_DIR, UNPROCESSED_DIR, LOG_DIR, LOG_FILE, MASK
def bin_to_influx(bin_filename, last_date):
df, metadata, status = bin_to_df.bin_to_df(bin_filename)
if status == 0:
df2 = df[df.index > last_date]
if df2.size > 0:
for col in df2.columns:
df3 = pd.DataFrame({'date': df2[col].index, 'value': df2[col].values, 'sensor': col,
'das': metadata['NetId']})
df3.set_index('date', inplace=True)
client.write_points(df3, 'measurement', {'sensor': metadata['NetId'] + '-' + col})
return status
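The incremental-load filter above (`df[df.index > last_date]`) keeps only rows newer than the last point already stored in InfluxDB; a standalone sketch of that slice:

```python
import pandas as pd

idx = pd.to_datetime(["2021-01-01", "2021-01-02", "2021-01-03"])
df = pd.DataFrame({"value": [1.0, 2.0, 3.0]}, index=idx)
# keep only rows strictly newer than the last point already written
newer = df[df.index > pd.Timestamp("2021-01-01")]
assert list(newer["value"]) == [2.0, 3.0]
```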
if __name__ == "__main__":
i = 1
status = None
log_path = path.join(BIN_DIR, LOG_DIR)
if not path.exists(log_path):
makedirs(log_path)
processed_path = path.join(BIN_DIR, PROCESSED_DIR)
if not path.exists(processed_path):
makedirs(processed_path)
logging_level = logging.DEBUG
logging.Formatter.converter = gmtime
log_format = '%(asctime)-15s %(levelname)s:%(message)s'
logging.basicConfig(format=log_format, datefmt='%Y/%m/%d %H:%M:%S UTC', level=logging_level,
handlers=[logging.FileHandler(path.join(BIN_DIR, LOG_DIR, LOG_FILE)), logging.StreamHandler()])
logging.info('_____ Started _____')
if len(sys.argv) > 1:
if len(sys.argv) % 2 == 1:
while i < len(sys.argv)-1:
if sys.argv[i] == 'MASK':
MASK = str(sys.argv[i+1])
elif sys.argv[i] == 'binpath':
BIN_DIR = str(sys.argv[i+1])
elif sys.argv[i] == 'dbname':
DATABASE['NAME'] = str(sys.argv[i+1])
else:
logging.warning('*** Unknown argument : ' + sys.argv[i])
pass
i += 2
else:
logging.error('*** Parsing failed : arguments should be given by pairs [key value]...')
status = 2
logging.info('_____ Ended _____')
sys.exit(status)
else:
logging.warning('*** No argument found...')
bin_filenames = sorted(glob.iglob(BIN_DIR+MASK+'.bin'))
logging.info('%d bin files to process...' % len(bin_filenames))
if len(bin_filenames) > 0:
client = DataFrameClient(DATABASE['HOST'], DATABASE['PORT'], DATABASE['USER'],
DATABASE['PASSWORD'], DATABASE['NAME'])
for f in bin_filenames:
metadata = bin_to_df.get_metadata(f)
if metadata is not None:
if metadata['NetId'] is not None:
net_id = metadata['NetId']
first_channel = metadata['Channels'][0]
tag_to_search = net_id + '-' + first_channel
                    last_measurement = client.query('select last(*) from "measurement" where "sensor" = \'%s\';' % tag_to_search)
if not last_measurement:
ld = datetime.datetime(1970, 1, 1, 0, 0, 0).replace(tzinfo=datetime.timezone.utc)
else:
ld = last_measurement['measurement'].index.to_pydatetime()[0]
status = bin_to_influx(f, ld)
if status == 0 or status == 1:
rename(f, path.join(path.dirname(f), PROCESSED_DIR, path.basename(f)))
rename(f + '.jsn', path.join(path.dirname(f), PROCESSED_DIR, path.basename(f) + '.jsn'))
else:
logging.warning('%s could not be processed...' % f)
if not path.exists(path.join(BIN_DIR, UNPROCESSED_DIR)):
makedirs(path.join(BIN_DIR, UNPROCESSED_DIR))
rename(f, path.join(path.dirname(f), UNPROCESSED_DIR, path.basename(f)))
rename(f + '.jsn', path.join(path.dirname(f), UNPROCESSED_DIR, path.basename(f) + '.jsn'))
else:
logging.warning('%s could not be processed because NetID is null' % f)
if not path.exists(path.join(BIN_DIR, UNPROCESSED_DIR)):
makedirs(path.join(BIN_DIR, UNPROCESSED_DIR))
rename(f, path.join(path.dirname(f), UNPROCESSED_DIR, path.basename(f)))
rename(f + '.jsn', path.join(path.dirname(f), UNPROCESSED_DIR, path.basename(f) + '.jsn'))
else:
status = 1
logging.warning('No files to process...')
logging.info('_____ Ended _____')
sys.exit(status)
|
Rejuvenate your upholstery with our Spotless Upholstery Cleaning Blackburn. Our couch cleaners thoroughly steam clean your lounges and sofas. We clean fabric and leather sofas, chairs and any other furniture. Book today and get a 10% discount on couch leather or fabric protection services.
Upholstery Cleaning Blackburn – provide professional fabric & leather couch steam cleaning, sofa cleaning, lounge cleaning and protection. Our Sofa Cleaners are Specialist in furniture stain removal & dining chairs cleaning. Call 1800 044 929 Spotless Upholstery Cleaning for Healthier and more hygienic upholstery!
Welcome to Spotless Upholstery Cleaning Blackburn – a 20-year-old company dealing in quality upholstery cleaning services. We are known for our assured results, quality upholstery cleaning, and exceptional customer service. Spotless Upholstery Cleaning offers flawless domestic upholstery cleaning and commercial upholstery cleaning solutions at highly affordable prices in all suburbs of Blackburn.
Doesn't your upholstery deserve the best kind of cleaning? Call us to experience an out-of-this-world upholstery cleaning service in Blackburn!
Who wouldn't like their favourite upholstery to last as long as possible? If you do, don't think twice. Just get in touch with Spotless Couch Cleaning Blackburn and get the best kind of cleaning for your upholstery!
Spotless Upholstery Cleaning Blackburn offers the finest cleaning services for your precious leather upholstery. We have a special team for leather upholstery cleaning, with unique cleaning solutions and methods designed to cater to the needs of your expensive leather furniture. Our leather upholstery cleaning service doesn't just clean the upholstery but adds lustre and glow as well, making your leather upholstery shine just like new again.
Spotless Upholstery Cleaning is not a big name in upholstery cleaning in Blackburn for nothing; there are numerous reasons behind our popularity. One of the major reasons is that we clean all types of sofas and couches, including recliner sofas. With the best skills, appropriate knowledge and advanced tools, we thoroughly clean your upholstery. No matter how complex the designs of your recliners are, we can clean and restore them like new. Unlike other upholstery cleaning companies, we do not charge you an arm and a leg for cleaning your recliner sofa. Our quality, affordability and availability are the basics that make us a popular choice among the locals in Blackburn.
The professional couch cleaners on our team in Blackburn use the best methods to clean and restore your couches. Before starting the couch cleaning process at your home, we inspect your upholstery to assess its stains and condition. Not every type of upholstery or couch can be cleaned the same way; hence, an inspection of the couches is necessary. Once we are done with the inspection, our cleaning team proceeds with the cleaning process. Couch dry cleaning is one such cleaning method that helps us bring your upholstery back to life.
Giving your upholstery a new life is a piece of cake now. All you have to do is call Spotless Upholstery Cleaning Blackburn! Call us for a free, no-obligation today!
The Spotless Upholstery Cleaning team consists of knowledgeable, certified, and highly trained upholstery cleaners who are experts at their jobs. The core ambition of our entire team is to please our customers and leave a smile on their faces once we have completed the job. Our cleaners reside in different suburbs of Blackburn to ensure quick delivery of cleaning services to all.
Rhys delivers excellence in upholstery cleaning in the southern suburbs of Blackburn.
Spotless Upholstery Cleaning Blackburn works with a single goal in mind – to make all homes and offices in Blackburn cleaner and healthier. As upholstery forms an essential and inevitable part of every home and office, we have expert upholstery cleaning services available at the lowest prices.
Call us to avail yourself of any of our numerous upholstery cleaning services available throughout Blackburn.
Clean the upholstery in your home and office by opting for our cost-effective upholstery cleaning solutions. Call Spotless Upholstery Cleaning Blackburn and switch to a cleaner, healthier tomorrow!
|
import attr
from attr import attrib, attrs
from firefed.feature import Feature, formatter
from firefed.output import out
from firefed.util import tabulate
@attrs
class Permissions(Feature):
"""List host permissions (e.g. location sharing).
This feature extracts the stored permissions which the user has granted to
particular hosts (e.g. popups, location sharing, desktop notifications).
"""
perms = attrib(default=None, init=False)
def prepare(self):
self.perms = self.load_sqlite(
db='permissions.sqlite',
table='moz_perms',
cls=attr.make_class('Permission', ['host', 'permission']),
column_map={'origin': 'host', 'type': 'permission'},
)
def summarize(self):
out('%d permissions found.' % len(list(self.perms)))
def run(self):
self.build_format()
@formatter('table', default=True)
def table(self):
rows = [attr.astuple(p) for p in self.perms]
tabulate(rows, headers=('Host', 'Permission'))
@formatter('csv')
def csv(self):
Feature.csv_from_items(self.perms)
|
"""
# Licensed to the Apache Software Foundation (ASF) under one *
# or more contributor license agreements. See the NOTICE file *
# distributed with this work for additional information *
# regarding copyright ownership. The ASF licenses this file *
# to you under the Apache License, Version 2.0 (the *
# "License"); you may not use this file except in compliance *
# with the License. You may obtain a copy of the License at *
# *
# http://www.apache.org/licenses/LICENSE-2.0 *
# *
# Unless required by applicable law or agreed to in writing, *
# software distributed under the License is distributed on an *
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY *
# KIND, either express or implied. See the License for the *
# specific language governing permissions and limitations *
# under the License.
"""
from __future__ import absolute_import
from ..msg.Field import *
from ..msg.ImportExportHelper import *
from ..msg.Message import *
from ..msg.StructValue import *
from ..msg.Type import *
from ..msg.ValueFactory import *
from ..util.DateSerializer import *
from ..util.ListSerializer import *
from ..util.MapSerializer import *
from ..util.SetSerializer import *
from .Validator_RuntimeException import *
from .Validator_long import *
class DefaultValueFactory(ValueFactory):
"""
Default implementation of ValueFactory which provides some
dynamic type and field support, as well as standard value
conversions and import/export support.
"""
# Names
ETCH_RUNTIME_EXCEPTION_TYPE_NAME = "_Etch_RuntimeException"
ETCH_LIST_TYPE_NAME = "_Etch_List"
ETCH_MAP_TYPE_NAME = "_Etch_Map"
ETCH_SET_TYPE_NAME = "_Etch_Set"
ETCH_DATETIME_TYPE_NAME = "_Etch_Datetime"
ETCH_AUTH_EXCEPTION_TYPE_NAME = "_Etch_AuthException"
ETCH_EXCEPTION_MESSAGE_NAME = "_exception"
MSG_FIELD_NAME = "msg"
MESSAGE_ID_FIELD_NAME = "_messageId"
IN_REPLY_TO_FIELD_NAME = "_inReplyTo"
RESULT_FIELD_NAME = "result"
# Fields
_mf_msg = Field(MSG_FIELD_NAME)
"""The msg field of the standard unchecked exception"""
_mf__messageId = Field(MESSAGE_ID_FIELD_NAME)
"""The well-known _messageId field"""
_mf__inReplyTo = Field(IN_REPLY_TO_FIELD_NAME)
"""The well-known _inReplyTo field"""
_mf_result = Field(RESULT_FIELD_NAME)
"""The well-known result field"""
@staticmethod
def init(typs, class2type):
"""
Initializes the standard types and fields needed by all
etch generated value factories.
@param typs
@param class2type
"""
cls = DefaultValueFactory
RuntimeExceptionSerializer.init(typs[cls.ETCH_RUNTIME_EXCEPTION_TYPE_NAME], class2type)
ListSerializer.init(typs[cls.ETCH_LIST_TYPE_NAME], class2type)
MapSerializer.init(typs[cls.ETCH_MAP_TYPE_NAME], class2type)
SetSerializer.init(typs[cls.ETCH_SET_TYPE_NAME], class2type)
DateSerializer.init(typs[cls.ETCH_DATETIME_TYPE_NAME], class2type)
AuthExceptionSerializer.init(typs[cls.ETCH_AUTH_EXCEPTION_TYPE_NAME], class2type)
# _mt__Etch_AuthException
t = typs.get(cls.ETCH_EXCEPTION_MESSAGE_NAME)
t.putValidator( cls._mf_result, Validator_RuntimeException.get())
t.putValidator( cls._mf__messageId, Validator_long.get(0))
t.putValidator( cls._mf__inReplyTo, Validator_long.get(0))
def __init__(self, typs, class2type):
"""
Constructs the DefaultValueFactory.
@param typs
@param class2type
"""
cls = self.__class__
self.__types = typs
self.__class2type = class2type
self._mt__Etch_RuntimeException = typs.get(cls.ETCH_RUNTIME_EXCEPTION_TYPE_NAME)
self._mt__Etch_List = typs.get(cls.ETCH_LIST_TYPE_NAME)
self._mt__Etch_Map = typs.get(cls.ETCH_MAP_TYPE_NAME)
self._mt__Etch_Set = typs.get(cls.ETCH_SET_TYPE_NAME)
self._mt__Etch_Datetime = typs.get(cls.ETCH_DATETIME_TYPE_NAME)
self._mt__Etch_AuthException = typs.get(cls.ETCH_AUTH_EXCEPTION_TYPE_NAME)
self._mt__exception = typs.get(cls.ETCH_EXCEPTION_MESSAGE_NAME)
def get_mt__Etch_RuntimeException(self):
return self._mt__Etch_RuntimeException
|
The Sentinel Alloy frameset is the perfect starting point to create the custom bike that suits your riding style. The heat-treated, hydroformed alloy frameset is perfect for those who want durability and strength at a great value. The alloy frame shares the exact same geometry and suspension setup as the carbon version. For rear suspension, Transition worked tirelessly with Fox to create the perfect tune with the extremely impressive DPX2 rear shock, which offers amazing small-bump sensitivity like a coil shock along with excellent mid-stroke support and bottom-out control. The Sentinel Alloy Frameset features a threaded BB, external rear brake routing for easy brake maintenance, full water bottle storage inside the main triangle, a boost rear end and rubber-molded frame protection.
|
""" Tools for Sphinx to build docs and/or websites.
"""
import os
import os.path as op
import re
import shutil
import subprocess
import sys

# ROOT_DIR and WEBSITE_DIR are assumed to be defined by the surrounding build setup.
if sys.version_info[0] < 3:
input = raw_input # noqa
def sh(cmd):
"""Execute command in a subshell, return status code."""
return subprocess.check_call(cmd, shell=True)
def sh2(cmd):
"""Execute command in a subshell, return stdout.
Stderr is unbuffered from the subshell."""
p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
out = p.communicate()[0]
retcode = p.returncode
if retcode:
raise subprocess.CalledProcessError(retcode, cmd)
else:
return out.rstrip().decode('utf-8', 'ignore')
def sphinx_clean(build_dir):
if op.isdir(build_dir):
shutil.rmtree(build_dir)
os.mkdir(build_dir)
print('Cleared build directory.')
def sphinx_build(src_dir, build_dir):
import sphinx
try:
ret = 0
ret = sphinx.main(['sphinx-build', # Dummy
'-b', 'html',
'-d', op.join(build_dir, 'doctrees'),
src_dir, # Source
op.join(build_dir, 'html'), # Dest
])
except SystemExit:
pass
if ret != 0:
raise RuntimeError('Sphinx error: %s' % ret)
print("Build finished. The HTML pages are in %s/html." % build_dir)
def sphinx_show(html_dir):
index_html = op.join(html_dir, 'index.html')
if not op.isfile(index_html):
sys.exit('Cannot show pages, build the html first.')
import webbrowser
webbrowser.open_new_tab(index_html)
def sphinx_copy_pages(html_dir, pages_dir, pages_repo):
print('COPYING PAGES')
# Create the pages repo if needed
if not op.isdir(pages_dir):
os.chdir(ROOT_DIR)
sh("git clone %s %s" % (pages_repo, pages_dir))
# Ensure that its up to date
os.chdir(pages_dir)
sh('git checkout master -q')
sh('git pull -q')
os.chdir('..')
# This is pretty unforgiving: we unconditionally nuke the destination
# directory, and then copy the html tree in there
tmp_git_dir = op.join(ROOT_DIR, pages_dir + '_git')
shutil.move(op.join(pages_dir, '.git'), tmp_git_dir)
try:
shutil.rmtree(pages_dir)
shutil.copytree(html_dir, pages_dir)
shutil.move(tmp_git_dir, op.join(pages_dir, '.git'))
finally:
if op.isdir(tmp_git_dir):
shutil.rmtree(tmp_git_dir)
# Copy individual files
open(op.join(pages_dir, 'README.md'), 'wb').write(
'Autogenerated website - do not edit\n'.encode('utf-8'))
for fname in ['CNAME', '.nojekyll']: # nojekyll or website won't work
if op.isfile(op.join(WEBSITE_DIR, fname)):
shutil.copyfile(op.join(WEBSITE_DIR, fname),
op.join(pages_dir, fname))
# Messages
os.chdir(pages_dir)
sh('git status')
print()
print("Website copied to _gh-pages. Above you can see its status:")
print(" Run 'make website show' to view.")
print(" Run 'make website upload' to commit and push.")
def sphinx_upload(repo_dir):
# Check head
os.chdir(repo_dir)
status = sh2('git status | head -1')
branch = re.match('On branch (.*)$', status).group(1)
if branch != 'master':
e = 'On %r, git branch is %r, MUST be "master"' % (repo_dir,
branch)
raise RuntimeError(e)
# Show repo and ask confirmation
print()
print('You are about to commit to:')
sh('git config --get remote.origin.url')
print()
print('Most recent 3 commits:')
sys.stdout.flush()
sh('git --no-pager log --oneline -n 3')
ok = input('Are you sure you want to commit and push? (y/[n]): ')
ok = ok or 'n'
# If ok, add, commit, push
if ok.lower() == 'y':
sh('git add .')
sh('git commit -am"Update (automated commit)"')
print()
sh('git push')
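The `sh2` helper above captures a subshell's stdout and raises on a non-zero exit. A minimal standalone sketch of the same pattern (the `echo` command is just an illustrative example):

```python
import subprocess

def sh2(cmd):
    """Run cmd in a subshell and return its stripped stdout, raising on failure."""
    p = subprocess.Popen(cmd, stdout=subprocess.PIPE, shell=True)
    out = p.communicate()[0]
    if p.returncode:
        raise subprocess.CalledProcessError(p.returncode, cmd)
    return out.rstrip().decode('utf-8', 'ignore')

print(sh2('echo hello'))  # the trailing newline from echo is stripped
```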
|
At Custom Homes Guys, we're here to satisfy your needs when it comes to Custom Homes in Angela, MT. We've got a staff of professional contractors and the most impressive solutions in the market to deliver just what you're looking for. We will use first-rate products and money-saving strategies to ensure that you get the most effective services at the greatest value. Call 888-472-8401 to get started.
Economizing is a vital part of any project. At the same time, you want the best and finest quality of work for Custom Homes in Angela, MT. We will make certain that our cost-saving efforts don't mean a lower standard of quality. We use the leading products and techniques to make sure that the work will stand the test of time, and we help you save money with practices that will not affect the excellence of your job. For example, we are careful to steer clear of expensive mistakes, work quickly to save working hours, and make sure that you enjoy the top discounts on products and labor. If you want to lower your expenses, Custom Homes Guys is the business to get in touch with. Dial 888-472-8401 to speak with our client service representatives now.
With regards to Custom Homes in Angela, MT, you should be informed to make the very best judgments. You don't want to go in blindly, and you should know what to expect. You're not going to encounter any kind of unexpected situations if you hire Custom Homes Guys. The first step is to call by dialing 888-472-8401 to begin your job. Within this call, you'll get your concerns responded to, and we will schedule a time to initiate work. Our crew will show up at the arranged time with the appropriate supplies, and will work with you through the entire project.
When you are planning a project for Custom Homes in Angela, MT, there are lots of good reasons to prefer Custom Homes Guys. We have the top customer support ratings, the highest quality products, and the most helpful and productive money-saving techniques. We are here to help you with the most skill and practical knowledge in the field. When you need Custom Homes in Angela, contact Custom Homes Guys by dialing 888-472-8401, and we'll be more than pleased to help you.
|
# -*- coding: utf-8 -*-
#
# This file is part of PyBuilder
#
# Copyright 2011-2020 PyBuilder Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from os.path import pathsep
import unittest
from pybuilder.core import Project
from pybuilder.errors import BuildFailedException
from pybuilder.plugins.python.cram_plugin import (_cram_command_for,
_find_files,
_report_file,
run_cram_tests,
)
from pybuilder.utils import jp, np
from test_utils import patch, Mock, call
class CramPluginTests(unittest.TestCase):
def test_command_respects_no_verbose(self):
project = Project('.')
project.set_property('verbose', False)
expected = ['-m', 'cram', '-E']
received = _cram_command_for(project)
self.assertEqual(expected, received)
def test_command_respects_verbose(self):
project = Project('.')
project.set_property('verbose', True)
expected = ['-m', 'cram', '-E', '--verbose']
received = _cram_command_for(project)
self.assertEqual(expected, received)
@patch('pybuilder.plugins.python.cram_plugin.discover_files_matching')
def test_find_files(self, discover_mock):
project = Project('.')
project.set_property('dir_source_cmdlinetest', np('any/dir'))
project.set_property('cram_test_file_glob', '*.t')
expected = [np(jp(project.basedir, './any/dir/test.cram'))]
discover_mock.return_value = expected
received = _find_files(project)
self.assertEqual(expected, received)
discover_mock.assert_called_once_with(np('any/dir'), '*.t')
def test_report(self):
project = Project('.')
project.set_property('dir_reports', np('any/dir'))
expected = np(jp(project.basedir, 'any/dir/cram.err'))
received = _report_file(project)
self.assertEqual(expected, received)
@patch('pybuilder.plugins.python.cram_plugin._cram_command_for')
@patch('pybuilder.plugins.python.cram_plugin._find_files')
@patch('pybuilder.plugins.python.cram_plugin._report_file')
@patch('pybuilder.plugins.python.cram_plugin.read_file')
def test_running_plugin_cram_from_target(self,
read_file_mock,
report_mock,
find_files_mock,
command_mock
):
project = Project('.')
project.set_property('cram_run_test_from_target', True)
project.set_property('dir_dist', 'python')
project.set_property('dir_dist_scripts', 'scripts')
project.set_property('verbose', False)
project._plugin_env = {}
logger = Mock()
reactor = Mock()
reactor.python_env_registry = {}
reactor.python_env_registry["pybuilder"] = pyb_env = Mock()
reactor.pybuilder_venv = pyb_env
pyb_env.environ = {}
pyb_env.executable = ["a/b"]
execute_mock = pyb_env.execute_command = Mock()
command_mock.return_value = ['cram']
find_files_mock.return_value = ['test1.cram', 'test2.cram']
report_mock.return_value = 'report_file'
read_file_mock.return_value = ['test fails for file', '# results']
execute_mock.return_value = 0
run_cram_tests(project, logger, reactor)
execute_mock.assert_called_once_with(
['a/b', 'cram', 'test1.cram', 'test2.cram'], 'report_file',
error_file_name='report_file',
env={'PYTHONPATH': np(jp(project.basedir, 'python')) + pathsep,
'PATH': np(jp(project.basedir, 'python/scripts')) + pathsep}
)
expected_info_calls = [call('Running Cram command line tests'),
call('Cram tests were fine'),
call('results'),
]
self.assertEqual(expected_info_calls, logger.info.call_args_list)
@patch('pybuilder.plugins.python.cram_plugin._cram_command_for')
@patch('pybuilder.plugins.python.cram_plugin._find_files')
@patch('pybuilder.plugins.python.cram_plugin._report_file')
@patch('pybuilder.plugins.python.cram_plugin.read_file')
def test_running_plugin_from_scripts(self,
read_file_mock,
report_mock,
find_files_mock,
command_mock
):
project = Project('.')
project.set_property('cram_run_test_from_target', False)
project.set_property('dir_source_main_python', 'python')
project.set_property('dir_source_main_scripts', 'scripts')
project.set_property('verbose', False)
project._plugin_env = {}
logger = Mock()
reactor = Mock()
reactor.python_env_registry = {}
reactor.python_env_registry["pybuilder"] = pyb_env = Mock()
reactor.pybuilder_venv = pyb_env
pyb_env.environ = {}
pyb_env.executable = ["a/b"]
execute_mock = pyb_env.execute_command = Mock()
command_mock.return_value = ['cram']
find_files_mock.return_value = ['test1.cram', 'test2.cram']
report_mock.return_value = 'report_file'
read_file_mock.return_value = ['test fails for file', '# results']
execute_mock.return_value = 0
run_cram_tests(project, logger, reactor)
execute_mock.assert_called_once_with(
['a/b', 'cram', 'test1.cram', 'test2.cram'], 'report_file',
error_file_name='report_file',
env={'PYTHONPATH': np(jp(project.basedir, 'python')) + pathsep,
'PATH': np(jp(project.basedir, 'scripts')) + pathsep}
)
expected_info_calls = [call('Running Cram command line tests'),
call('Cram tests were fine'),
call('results'),
]
self.assertEqual(expected_info_calls, logger.info.call_args_list)
@patch('pybuilder.plugins.python.cram_plugin.tail_log')
@patch('pybuilder.plugins.python.cram_plugin._cram_command_for')
@patch('pybuilder.plugins.python.cram_plugin._find_files')
@patch('pybuilder.plugins.python.cram_plugin._report_file')
@patch('pybuilder.plugins.python.cram_plugin.read_file')
def test_running_plugin_fails(self,
read_file_mock,
report_mock,
find_files_mock,
command_mock,
tail_mock,
):
project = Project('.')
project.set_property('verbose', False)
project.set_property('dir_source_main_python', 'python')
project.set_property('dir_source_main_scripts', 'scripts')
logger = Mock()
reactor = Mock()
reactor.python_env_registry = {}
reactor.python_env_registry["pybuilder"] = pyb_env = Mock()
reactor.pybuilder_venv = pyb_env
pyb_env.environ = {}
pyb_env.executable = ["a/b"]
execute_mock = pyb_env.execute_command = Mock()
command_mock.return_value = ['cram']
find_files_mock.return_value = ['test1.cram', 'test2.cram']
report_mock.return_value = 'report_file'
read_file_mock.return_value = ['test fails for file', '# results']
execute_mock.return_value = 1
tail_mock.return_value = "tail data"
self.assertRaises(
BuildFailedException, run_cram_tests, project, logger, reactor)
execute_mock.assert_called_once_with(
['a/b', 'cram', 'test1.cram', 'test2.cram'], 'report_file',
error_file_name='report_file',
env={'PYTHONPATH': np(jp(project.basedir, 'python')) + pathsep,
'PATH': np(jp(project.basedir, 'scripts')) + pathsep}
)
expected_info_calls = [call('Running Cram command line tests'),
]
expected_error_calls = [call('Cram tests failed! See report_file for full details:\ntail data'),
]
self.assertEqual(expected_info_calls, logger.info.call_args_list)
self.assertEqual(expected_error_calls, logger.error.call_args_list)
@patch('pybuilder.plugins.python.cram_plugin._cram_command_for')
@patch('pybuilder.plugins.python.cram_plugin._find_files')
@patch('pybuilder.plugins.python.cram_plugin._report_file')
@patch('pybuilder.plugins.python.cram_plugin.read_file')
def test_running_plugin_no_failure_no_tests(self,
read_file_mock,
report_mock,
find_files_mock,
command_mock
):
project = Project('.')
project.set_property('verbose', True)
project.set_property('dir_source_main_python', 'python')
project.set_property('dir_source_main_scripts', 'scripts')
project.set_property("cram_fail_if_no_tests", False)
project._plugin_env = {}
logger = Mock()
reactor = Mock()
reactor.python_env_registry = {}
reactor.python_env_registry["pybuilder"] = pyb_env = Mock()
reactor.pybuilder_venv = pyb_env
pyb_env.environ = {}
pyb_env.executable = ["a/b"]
execute_mock = pyb_env.execute_command = Mock()
command_mock.return_value = ['cram']
find_files_mock.return_value = []
report_mock.return_value = 'report_file'
read_file_mock.return_value = ['test fails for file', '# results']
execute_mock.return_value = 1
run_cram_tests(project, logger, reactor)
execute_mock.assert_not_called()
expected_info_calls = [call('Running Cram command line tests'),
]
self.assertEqual(expected_info_calls, logger.info.call_args_list)
@patch('pybuilder.plugins.python.cram_plugin._cram_command_for')
@patch('pybuilder.plugins.python.cram_plugin._find_files')
@patch('pybuilder.plugins.python.cram_plugin._report_file')
@patch('pybuilder.plugins.python.cram_plugin.read_file')
def test_running_plugin_failure_no_tests(self,
read_file_mock,
report_mock,
find_files_mock,
command_mock
):
project = Project('.')
project.set_property('verbose', True)
project.set_property('dir_source_main_python', 'python')
project.set_property('dir_source_main_scripts', 'scripts')
project.set_property("cram_fail_if_no_tests", True)
project._plugin_env = {}
logger = Mock()
reactor = Mock()
reactor.python_env_registry = {}
reactor.python_env_registry["pybuilder"] = pyb_env = Mock()
pyb_env.environ = {}
execute_mock = pyb_env.execute_command = Mock()
command_mock.return_value = ['cram']
find_files_mock.return_value = []
report_mock.return_value = 'report_file'
read_file_mock.return_value = ['test fails for file', '# results']
execute_mock.return_value = 1
self.assertRaises(
BuildFailedException, run_cram_tests, project, logger, reactor)
execute_mock.assert_not_called()
expected_info_calls = [call('Running Cram command line tests'),
]
self.assertEqual(expected_info_calls, logger.info.call_args_list)
|
Take years off your age and restore confidence in your smile!
Have your pearly whites lost their lustre? Regular consumption of coffee, tea, cola and red wine can contribute to discolouration of your teeth. So too can tobacco, ageing, medication, poor oral hygiene and your genetic make-up. These stains and discolouration can age your smile, but fortunately there is a quick and easy way to reclaim naturally white teeth.
In as little as 90 minutes, we can restore confidence in your smile using the safe and effective Smartbleach 3LT Green light laser teeth whitening system.
Smartbleach teeth whitening uses a unique and patented process that combines pure green light and a specially formulated red gel to create a photodynamic teeth whitening treatment that is highly effective yet gentle on the tooth enamel.
Enjoy the opportunity to sit back and relax with some music or a movie of your choice, while your dental professional performs your teeth whitening procedure. At the end of your session, you’ll receive a personal set of ‘Before & After’ photos and – most importantly – a brilliant smile!
Only Smartbleach can use the unique combination of pure green laser-like light and a scientifically engineered alkaline gel. The alkaline gel does not etch tooth enamel like traditional acid-based bleaching gels and the laser light avoids heating of the tooth-pulp like heat-lamp based systems meaning less sensitivity post-treatment. Assuming you look after your teeth, experience has shown that the benefits of Smartbleach can last for years.
For a smile that dazzles, contact the friendly team at Hills Dental Care for more information about teeth whitening.
|
# coding=utf-8
# COPYRIGHT
#
# All contributions by Raghavendra Kotikalapudi:
# Copyright (c) 2016, Raghavendra Kotikalapudi.
# All rights reserved.
#
# All other contributions:
# Copyright (c) 2016, the respective contributors.
# All rights reserved.
#
# Copyright (c) 2018 Google LLC
# All rights reserved.
#
# Each contributor holds copyright over their respective contributions.
# The project versioning (Git) records all such contribution source information.
#
# LICENSE
#
# The MIT License (MIT)
#
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
"""Model definitions for the R-network.
Forked from https://github.com/raghakot/keras-resnet/blob/master/resnet.py.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import gin
# pytype: disable=import-error
from tensorflow.keras import backend as K
from tensorflow.keras.activations import sigmoid
from tensorflow.keras.layers import Activation
from tensorflow.keras.layers import add
from tensorflow.keras.layers import AveragePooling2D
from tensorflow.keras.layers import BatchNormalization
from tensorflow.keras.layers import concatenate
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import Dot
from tensorflow.keras.layers import Flatten
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Lambda
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.models import Model
from tensorflow.keras.regularizers import l2
# pytype: enable=import-error
EMBEDDING_DIM = 512
TOP_HIDDEN = 4
def _bn_relu(inpt):
"""Helper to build a BN -> relu block."""
norm = BatchNormalization(axis=3)(inpt)
return Activation("relu")(norm)
def _conv_bn_relu(**conv_params):
"""Helper to build a conv -> BN -> relu block."""
filters = conv_params["filters"]
kernel_size = conv_params["kernel_size"]
strides = conv_params.setdefault("strides", (1, 1))
kernel_initializer = conv_params.setdefault("kernel_initializer", "he_normal")
padding = conv_params.setdefault("padding", "same")
kernel_regularizer = conv_params.setdefault("kernel_regularizer", l2(1.e-4))
def f(inpt):
conv = Conv2D(
filters=filters,
kernel_size=kernel_size,
strides=strides,
padding=padding,
kernel_initializer=kernel_initializer,
kernel_regularizer=kernel_regularizer)(
inpt)
return _bn_relu(conv)
return f
def _bn_relu_conv(**conv_params):
"""Helper to build a BN -> relu -> conv block."""
# This is an improved scheme proposed in http://arxiv.org/pdf/1603.05027v2.pdf
filters = conv_params["filters"]
kernel_size = conv_params["kernel_size"]
strides = conv_params.setdefault("strides", (1, 1))
kernel_initializer = conv_params.setdefault("kernel_initializer", "he_normal")
padding = conv_params.setdefault("padding", "same")
kernel_regularizer = conv_params.setdefault("kernel_regularizer", l2(1.e-4))
def f(inpt):
activation = _bn_relu(inpt)
return Conv2D(
filters=filters,
kernel_size=kernel_size,
strides=strides,
padding=padding,
kernel_initializer=kernel_initializer,
kernel_regularizer=kernel_regularizer)(
activation)
return f
def _shortcut(inpt, residual):
"""Adds shortcut between inpt and residual block and merges with "sum"."""
# Expand channels of shortcut to match residual.
# Stride appropriately to match residual (width, height)
# Should be int if network architecture is correctly configured.
input_shape = K.int_shape(inpt)
residual_shape = K.int_shape(residual)
stride_width = int(round(input_shape[1] / residual_shape[1]))
stride_height = int(round(input_shape[2] / residual_shape[2]))
equal_channels = input_shape[3] == residual_shape[3]
shortcut = inpt
# 1 X 1 conv if shape is different. Else identity.
if stride_width > 1 or stride_height > 1 or not equal_channels:
shortcut = Conv2D(
filters=residual_shape[3],
kernel_size=(1, 1),
strides=(stride_width, stride_height),
padding="valid",
kernel_initializer="he_normal",
kernel_regularizer=l2(0.0001))(
inpt)
return add([shortcut, residual])
def _residual_block(block_function, filters, repetitions, is_first_layer=False):
"""Builds a residual block with repeating bottleneck blocks."""
def f(inpt):
"""Helper function."""
for i in range(repetitions):
init_strides = (1, 1)
if i == 0 and not is_first_layer:
init_strides = (2, 2)
inpt = block_function(
filters=filters,
init_strides=init_strides,
is_first_block_of_first_layer=(is_first_layer and i == 0))(
inpt)
return inpt
return f
def basic_block(filters,
init_strides=(1, 1),
is_first_block_of_first_layer=False):
"""Basic 3 X 3 convolution blocks for use on resnets with layers <= 34."""
# Follows improved proposed scheme in http://arxiv.org/pdf/1603.05027v2.pdf
def f(inpt):
"""Helper function."""
if is_first_block_of_first_layer:
# don't repeat bn->relu since we just did bn->relu->maxpool
conv1 = Conv2D(
filters=filters,
kernel_size=(3, 3),
strides=init_strides,
padding="same",
kernel_initializer="he_normal",
kernel_regularizer=l2(1e-4))(
inpt)
else:
conv1 = _bn_relu_conv(
filters=filters, kernel_size=(3, 3), strides=init_strides)(
inpt)
residual = _bn_relu_conv(filters=filters, kernel_size=(3, 3))(conv1)
return _shortcut(inpt, residual)
return f
def _bn_relu_for_dense(inpt):
norm = BatchNormalization(axis=1)(inpt)
return Activation("relu")(norm)
def _top_network(input_shape):
"""Add top classification layers.
Args:
input_shape: shape of the embedding of the input image.
Returns:
A model taking a batch of input image embeddings, returning a batch of
similarities (shape [batch, 2])
"""
x1 = Input(shape=input_shape, name="top_deep_net_x1")
x2 = Input(shape=input_shape, name="top_deep_net_x2")
x = concatenate([x1, x2])
raw_result = _bn_relu_for_dense(x)
for _ in range(TOP_HIDDEN):
raw_result = Dense(
units=EMBEDDING_DIM, kernel_initializer="he_normal")(
raw_result)
raw_result = _bn_relu_for_dense(raw_result)
output = Dense(
units=2, activation="softmax", kernel_initializer="he_normal")(
raw_result)
model = Model(inputs=[x1, x2], outputs=output)
model.summary()
return model
def _metric_top_network(input_shape):
"""A simple top network that basically computes sigmoid(dot_product(x1, x2)).
Args:
input_shape: shape of the embedding of the input image.
Returns:
A model taking a batch of input image embeddings, returning a batch of
similarities (shape [batch, 2])
"""
x1 = Input(shape=input_shape, name="top_metric_net_x1")
x2 = Input(shape=input_shape, name="top_metric_net_x2")
def one_hot_sigmoid(x):
return K.concatenate([1 - sigmoid(x), sigmoid(x)], axis=1)
dot_product = Dot(axes=-1)([x1, x2])
output = Lambda(one_hot_sigmoid)(dot_product)
model = Model(inputs=[x1, x2], outputs=output)
model.summary()
return model
class ResnetBuilder(object):
"""Factory class for creating Resnet models."""
@staticmethod
def build(input_shape, num_outputs, block_fn, repetitions, is_classification):
"""Builds a custom ResNet like architecture.
Args:
input_shape: The inpt shape in the form (nb_rows, nb_cols, nb_channels)
num_outputs: The number of outputs at final softmax layer
block_fn: The block function to use. This is either `basic_block` or
`bottleneck`. The original paper used basic_block for layers < 50
repetitions: Number of repetitions of various block units. At each block
unit, the number of filters are doubled and the inpt size is halved
is_classification: if True add softmax layer on top
Returns:
The keras `Model`.
The model's input is an image tensor. Its shape is [batch, height, width,
channels] if the backend is tensorflow.
The model's output is the embedding with shape [batch, num_outputs].
Raises:
Exception: wrong input shape.
"""
if len(input_shape) != 3:
raise Exception(
"Input shape should be a tuple (nb_rows, nb_cols, nb_channels)")
inpt = Input(shape=input_shape)
conv1 = _conv_bn_relu(filters=64, kernel_size=(7, 7), strides=(2, 2))(inpt)
pool1 = MaxPooling2D(
pool_size=(3, 3), strides=(2, 2), padding="same")(
conv1)
block = pool1
filters = 64
for i, r in enumerate(repetitions):
block = _residual_block(
block_fn, filters=filters, repetitions=r, is_first_layer=(i == 0))(
block)
filters *= 2
# Last activation
block = _bn_relu(block)
# Classifier block
block_shape = K.int_shape(block)
pool2 = AveragePooling2D(
pool_size=(block_shape[1], block_shape[2]),
strides=(1, 1))(
block)
flatten1 = Flatten()(pool2)
last_activation = None
if is_classification:
last_activation = "softmax"
dense = Dense(
units=num_outputs,
kernel_initializer="he_normal",
activation=last_activation)(
flatten1)
model = Model(inputs=inpt, outputs=dense)
model.summary()
return model
@staticmethod
def build_resnet_18(input_shape, num_outputs, is_classification=True):
"""Create Resnet-18."""
return ResnetBuilder.build(input_shape, num_outputs, basic_block,
[2, 2, 2, 2], is_classification)
@staticmethod
@gin.configurable
def build_siamese_resnet_18(input_shape,
use_deep_top_network=True,
trainable_bottom_network=True):
"""Create siamese architecture for R-network.
Args:
input_shape: Shape of the input images, (height, width, channels)
use_deep_top_network: If true (default), a deep network will be used for
comparing embeddings. Otherwise, we use a simple
distance metric.
trainable_bottom_network: Whether the bottom (embedding) model is
trainable.
Returns:
A tuple:
- The model mapping two images [batch, height, width, channels] to
similarities [batch, 2].
- The embedding model mapping one image [batch, height, width, channels]
to embedding [batch, EMBEDDING_DIM].
- The similarity model mapping two embedded images
[batch, 2*EMBEDDING_DIM] to similariries [batch, 2].
The returned models share weights. In particular, loading the weights of
the first model also loads the weights of the other two models.
"""
branch = ResnetBuilder.build_resnet_18(
input_shape, EMBEDDING_DIM, is_classification=False)
branch.trainable = trainable_bottom_network
x1 = Input(shape=input_shape, name="x1")
x2 = Input(shape=input_shape, name="x2")
y1 = branch(x1)
y2 = branch(x2)
if use_deep_top_network:
similarity_network = _top_network((EMBEDDING_DIM,))
else:
similarity_network = _metric_top_network((EMBEDDING_DIM,))
output = similarity_network([y1, y2])
return Model(inputs=[x1, x2], outputs=output), branch, similarity_network
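The way `ResnetBuilder.build` walks the `repetitions` list can be sketched without TensorFlow: filters double at each block group, and the first block of every group after the first uses a (2, 2) stride to halve the feature map. The `block_plan` helper below is our own illustration of that schedule, not part of the original code.

```python
def block_plan(repetitions, base_filters=64):
    """Return the (filters, strides) schedule that build() would produce."""
    plan = []
    filters = base_filters
    for i, reps in enumerate(repetitions):
        for j in range(reps):
            # first block of every group except the first downsamples
            strides = (2, 2) if (j == 0 and i != 0) else (1, 1)
            plan.append((filters, strides))
        filters *= 2
    return plan

# ResNet-18 uses repetitions [2, 2, 2, 2]
plan = block_plan([2, 2, 2, 2])
assert [f for f, _ in plan] == [64, 64, 128, 128, 256, 256, 512, 512]
assert [s for _, s in plan].count((2, 2)) == 3  # one downsampling per later group
```

This mirrors the `is_first_layer=(i == 0)` logic passed into `_residual_block` above.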
|
(Medical Xpress)—Deakin University researchers have found a connection between poor dental health and depression.
Using data from a comprehensive health survey of more than 10,000 people aged 20-75 years living in the United States, the Deakin IMPACT Strategic Research Centre researchers found that poor dental health (as measured by the number of dental conditions a person had) increased the likelihood of being depressed.
"Not only did we find a connection between dental health and depression, we also demonstrated that a dose-response exists between the two conditions, meaning that the more dental conditions one had the greater the severity of their depression," said Deakin's Dr Adrienne O'Neil.
"This relationship held true even after accounting for other factors that could potentially explain the association, such as high body mass index and CRP, a protein that is often used as a general marker of inflammation in the body."
Depression is considered an inflammatory disorder, meaning that sources of inflammation such as bad dietary habits, being overweight or the presence of other medical conditions can contribute to the biological processes that induce mental disorders from a very early age. Poor dental health, which is a source of inflammation, has not been investigated extensively in the context of its links with mental health. The researchers therefore analysed the data from the National Health and Nutrition Examination Survey from the United States to investigate the possible connection.
They found almost two thirds of participants reporting depression (61 per cent) also reported having an aching mouth in the past year and more than half (57.4 per cent) considered their teeth to be in fair or poor condition.
While the researchers were able to demonstrate that depression is linked to poor dental health, through this study they were not able to determine why.
"The relationship between dental health and depression is not well understood, with previous studies investigating poor dental health as a by-product of depression, rather than a precursor," Dr O'Neil said.
"Although the results of this study provide only a snapshot of this association, they add to emerging theories around the importance of oral health and bacteria in mental health.
"This is an exciting area of research Deakin is exploring further with longitudinal data collected here in Australia. Specifically, we are currently conducting a study of how microbiota and the bacteria in the mouth, as well as the gut, may be related to inflammatory disease, including depression.
"If poor dental health is a risk factor for depression, this may have implications for depression management, as well as depression prevention from a public health perspective."
The results of this study are published in the online version of the journal General Hospital Psychiatry.
Perhaps they have it back to front, and that poor dental health is a consequence of the patients' depression inhibiting good body-maintenance habits ??
|
"""
https://bitbucket.org/sulab/wikidatabots/src/4f2e4bdf3d7328eb6fd94cc67af61e194bda0a96/genes/orthologs/human/parseHomologene.py?at=dronetest_DiseaseBot&fileviewer=file-view-default
https://www.wikidata.org/wiki/Q14911732#P684
https://www.wikidata.org/wiki/Q18049645
homologene release 68
https://www.wikidata.org/wiki/Q20976936
"""
import argparse
import json
import os
from collections import defaultdict
from datetime import datetime
from tqdm import tqdm
from scheduled_bots import get_default_core_props, PROPS
from scheduled_bots.geneprotein import HelperBot
from scheduled_bots.geneprotein.Downloader import MyGeneDownloader
from wikidataintegrator import wdi_login, wdi_core, wdi_helpers
core_props = get_default_core_props()
try:
from scheduled_bots.local import WDUSER, WDPASS
except ImportError:
if "WDUSER" in os.environ and "WDPASS" in os.environ:
WDUSER = os.environ['WDUSER']
WDPASS = os.environ['WDPASS']
else:
raise ValueError("WDUSER and WDPASS must be specified in local.py or as environment variables")
__metadata__ = {'name': 'OrthologBot',
'maintainer': 'GSS',
'tags': ['gene', 'ortholog'],
}
def main(metadata, log_dir="./logs", fast_run=True, write=True):
"""
Main function for creating/updating genes
:param metadata: looks like: {"ensembl" : 84, "cpdb" : 31, "netaffy" : "na35", "ucsc" : "20160620", .. }
:type metadata: dict
:param log_dir: dir to store logs
:type log_dir: str
:param fast_run: use fast run mode
:type fast_run: bool
:param write: actually perform write
:type write: bool
:return: None
"""
# login
login = wdi_login.WDLogin(user=WDUSER, pwd=WDPASS)
wdi_core.WDItemEngine.setup_logging(log_dir=log_dir, logger_name='WD_logger', log_name=log_name,
header=json.dumps(__metadata__))
# get all ids mappings
entrez_wdid = wdi_helpers.id_mapper(PROPS['Entrez Gene ID'])
wdid_entrez = {v: k for k, v in entrez_wdid.items()}
homo_wdid = wdi_helpers.id_mapper(PROPS['HomoloGene ID'], return_as_set=True)
wdid_homo = dict()
for homo, wdids in homo_wdid.items():
for wdid in wdids:
wdid_homo[wdid] = homo
entrez_homo = {wdid_entrez[wdid]: homo for wdid, homo in wdid_homo.items() if wdid in wdid_entrez}
taxon_wdid = wdi_helpers.id_mapper(PROPS['NCBI Taxonomy ID'])
# only do certain records
mgd = MyGeneDownloader(q="_exists_:homologene AND type_of_gene:protein-coding",
fields=','.join(['taxid', 'homologene', 'entrezgene']))
docs, total = mgd.query()
docs = list(tqdm(docs, total=total))
records = HelperBot.tag_mygene_docs(docs, metadata)
# group together all orthologs
# d[taxid][entrezgene] = { set of entrezgene ids for orthologs }
d = defaultdict(lambda: defaultdict(set))
entrez_taxon = dict() # keep this for the qualifier on the statements
for doc in records:
this_taxid = doc['taxid']['@value']
this_entrez = doc['entrezgene']['@value']
entrez_taxon[str(this_entrez)] = str(this_taxid)
if str(this_entrez) not in entrez_wdid:
continue
for taxid, entrez in doc['homologene']['@value']['genes']:
if taxid == 4932 and this_taxid == 559292:
# ridiculous workaround because entrez has the taxid for the strain and homologene has it for the species
# TODO: This needs to be fixed if you want to use other things that may have species/strains .. ?`
continue
if taxid != this_taxid and str(entrez) in entrez_wdid:
d[str(this_taxid)][str(this_entrez)].add(str(entrez))
print("taxid: # of genes : {}".format({k: len(v) for k, v in d.items()}))
homogene_ver = metadata['homologene']
release = wdi_helpers.Release("HomoloGene build{}".format(homogene_ver), "Version of HomoloGene", homogene_ver,
edition_of_wdid='Q468215',
archive_url='ftp://ftp.ncbi.nih.gov/pub/HomoloGene/build{}/'.format(
homogene_ver)).get_or_create(login)
reference = lambda homogeneid: [wdi_core.WDItemID(release, PROPS['stated in'], is_reference=True),
wdi_core.WDExternalID(homogeneid, PROPS['HomoloGene ID'], is_reference=True)]
ec = 0
for taxid, subd in tqdm(d.items()):
for entrezgene, orthologs in tqdm(subd.items(), leave=False):
try:
do_item(entrezgene, orthologs, reference, entrez_homo, entrez_taxon, taxon_wdid, entrez_wdid, login,
write)
except Exception as e:
wdi_helpers.format_msg(entrezgene, PROPS['Entrez Gene ID'], None, str(e), type(e))
ec += 1
# clear the fast run store once we move on to the next taxon
wdi_core.WDItemEngine.fast_run_store = []
wdi_core.WDItemEngine.fast_run_container = None
    print("Completed successfully with {} exceptions".format(ec))
def do_item(entrezgene, orthologs, reference, entrez_homo, entrez_taxon, taxon_wdid, entrez_wdid, login, write):
entrezgene = str(entrezgene)
s = []
this_ref = reference(entrez_homo[entrezgene])
for ortholog in orthologs:
ortholog = str(ortholog)
if ortholog == entrezgene:
continue
if ortholog not in entrez_taxon:
raise ValueError("missing taxid for: " + ortholog)
qualifier = wdi_core.WDItemID(taxon_wdid[entrez_taxon[ortholog]], PROPS['found in taxon'], is_qualifier=True)
s.append(wdi_core.WDItemID(entrez_wdid[ortholog], PROPS['ortholog'],
references=[this_ref], qualifiers=[qualifier]))
item = wdi_core.WDItemEngine(wd_item_id=entrez_wdid[entrezgene], data=s, fast_run=fast_run,
fast_run_base_filter={PROPS['Entrez Gene ID']: '',
PROPS['found in taxon']: taxon_wdid[entrez_taxon[entrezgene]]},
core_props=core_props)
wdi_helpers.try_write(item, entrezgene, PROPS['Entrez Gene ID'], edit_summary="edit orthologs", login=login,
write=write)
# print(item.wd_item_id)
if __name__ == "__main__":
"""
Data to be used is retrieved from mygene.info
"""
parser = argparse.ArgumentParser(description='run wikidata gene bot')
parser.add_argument('--log-dir', help='directory to store logs', type=str)
parser.add_argument('--dummy', help='do not actually do write', action='store_true')
parser.add_argument('--fastrun', dest='fastrun', action='store_true')
parser.add_argument('--no-fastrun', dest='fastrun', action='store_false')
parser.set_defaults(fastrun=True)
args = parser.parse_args()
log_dir = args.log_dir if args.log_dir else "./logs"
run_id = datetime.now().strftime('%Y%m%d_%H:%M')
__metadata__['run_id'] = run_id
fast_run = args.fastrun
# get metadata about sources
mgd = MyGeneDownloader()
metadata = dict()
src = mgd.get_metadata()['src']
for source in src.keys():
metadata[source] = src[source]["version"]
log_name = '{}-{}.log'.format(__metadata__['name'], run_id)
if wdi_core.WDItemEngine.logger is not None:
wdi_core.WDItemEngine.logger.handles = []
wdi_core.WDItemEngine.setup_logging(log_dir=log_dir, log_name=log_name, header=json.dumps(__metadata__),
logger_name='orthologs')
main(metadata, log_dir=log_dir, fast_run=fast_run, write=not args.dummy)
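The grouping step in `main` above builds `d[taxid][entrezgene] = {set of ortholog entrez IDs}` with a nested `defaultdict`. A standalone sketch of that pattern, using made-up toy records rather than real HomoloGene data:

```python
from collections import defaultdict

# toy stand-ins for the mygene.info docs (not real HomoloGene content)
records = [
    {"taxid": 9606, "entrezgene": 1017, "homologene_genes": [(9606, 1017), (10090, 12566)]},
    {"taxid": 10090, "entrezgene": 12566, "homologene_genes": [(9606, 1017), (10090, 12566)]},
]

d = defaultdict(lambda: defaultdict(set))
for doc in records:
    this_taxid, this_entrez = doc["taxid"], doc["entrezgene"]
    for taxid, entrez in doc["homologene_genes"]:
        if taxid != this_taxid:  # only cross-species pairs count as orthologs
            d[str(this_taxid)][str(this_entrez)].add(str(entrez))

assert d["9606"]["1017"] == {"12566"}
assert d["10090"]["12566"] == {"1017"}
```

The bot adds the extra guards (skipping the yeast strain/species taxid mismatch, requiring the ortholog to exist in `entrez_wdid`) on top of this core loop.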
|
A mix of old and new technology. Horse power on modern running gear. Photo by Peter van Beek. Click the image to view the photo album.
Peter van Beek has documented the difficult life of nomads in a modernizing Europe. Fear, stereotypes, and unfamiliarity dominate their way of life and place them into a partially self-imposed, marginalized portion of society. Although there is terrible poverty, he documents family life and survival of these remarkable people.
Simple shelter as used by our ancestors since the beginning of time.
It isn’t easy being a nomad in a modern technological world. There is no easy place for this lifestyle.
The world has changed but many traditions have not.
There are certainly exceptions to nomadism. Many Romany cling to their traditions and morph them into a new lifestyle. All of our people have done this.
But it isn’t all oppressive poverty. “By collecting and selling iron they get very rich and build their own village with huge palaces where they started living.” While settling down, the community keeps its own unique sense of style.
Hard work and some flexibility can make assimilation slightly easier.
Ethnic identity shows in this vernacular style.
Beautiful young women with a foot in both worlds.
|
import send_email

def main():
    # Collect account information from the user
    print("Welcome to the Watch My Pi! installer.\n"
          "To use the software, Watch My Pi! needs your e-mail address and login credentials.\n"
          "These are only stored locally and are never sent anywhere online.\n"
          "WARNING: the credentials, including the password, are currently readable on disk, "
          "so we recommend using a dedicated e-mail account just for Watch My Pi!.")
    emailData = input("\n E-Mail: \n ") + "|"
    emailData += input("\n SMTP address (form: smtp.domain.com): \n ") + "|"
    emailData += input("\n Password: \n ")
    rightKey = False
    # TODO: random key generation
    key = "7T23C"
    # Save the data as a .txt file
    path = "C:\\Users\\Hartmut\\Desktop\\testDatei.txt"
    file = open(path, "w")
    file.write(emailData)
    file.close()
    # Send an e-mail and confirm the key
    send_email.main("Please enter this key into the console:\n\n" + key)
    print("\nA key has been sent to you by e-mail. Please confirm that the data you entered "
          "is correct by typing the key into the console.")
    while not rightKey:  # bug fix: the original looped on rightKey, so the check never ran
        if input("Key: ") == key:
            rightKey = True
        else:
            print("Unfortunately that key was wrong.")
    print("You have successfully installed Watch My Pi. Enjoy! :)")
    return True

if __name__ == '__main__':
    main()
|
Mitchell joked once in Palo Alto that teachers should bring students out to his Livingston farm to pull weeds. With help from the Magnesons and other organizers, this joke turned into a series of educational field trips.
Students follow a regular — but enhanced — school curriculum with trips to Yosemite National Park, wetlands, beaches and the Central Valley to learn about ecosystems, land use and history.
The Magnesons offered several different learning stations on their land. Some students learned the ins and outs of a dairy farm. Others took a short hike down to the Merced River.
There on the banks, East Merced Resource Conservation District representatives Cindy Lashbrook and Cathy Weber explained the watershed and life-cycle of the salmon that used to crowd the river. Now the fishes’ numbers are dwindling.
“Basically, it’s all the area of land that drains into a water source,” she explained.
The students then participated in an exercise to study how salmon smell their way into different rivers, such as the San Joaquin, Merced and Tuolumne.
They made their way back to the barn soon after that, where another group of ninth graders stood near a bunch of wide-eyed, 3-month-old Holstein calves. There they listened to the Magnesons’ son, Scott, talk about dairy farming.
Students learned about the retail end of farming and how cows are raised from newborn calves that drink milk from a bucket to milk cows standing in the stanchions to be milked.
The teens who rode a bus from Palo Alto had, for the most part, never been on a farm before. They were amazed to learn how much work goes into getting milk from the cow to the grocery store.
The 500-acre dairy they visited has been a farm for more than 100 years. The Magnesons have worked the land since 1949, and have made sure the farm will never be developed.
The river bottom land making up the farm was put into an easement that will always keep the land in agriculture. “The trust (that the land is in) takes the rights of the land so no development can be done — forever,” Charles Magneson said.
Along with a tour of the milk barn, calf barn and the area where milk cows hang out, the students also got a lesson about how farmers market their wares.
It wasn’t just students who learned from the field trip. One of their parents, Mary Dougherty, said although she often buys organic food for her family, she never realized the work that went into it.
“I thought that organic milk meant that the cows didn’t get hormones,” she said.
Both Dougherty and the students learned that going organic takes three years, and that cows must be fed organically-grown feeds, and must spend time at grass.
“These kids have absolutely learned a lot today,” Dougherty said.
|
import sys
sys.path.append("/home/mdupont/experiments/pythoscope/")
sys.path.append("/home/mdupont/experiments/pythoscope/pythoscope")
sys.path.append("/home/mdupont/experiments/py-loadr-forkr-debugr")
sys.path.append("/home/mdupont/experiments/astunparse/lib/")

import ast
import inspect
import pprint

import pythoscope
from pythoscope.store import Project, Function
import forkr
# forkr.set_logging() # turn on all logging
import astunparse
from ast import *
def test_unparse_ast() :
print "Hello Python!"
def pythoscope_t3st():
project = Project.from_directory(".")
#inspect_project(project)
#add_tests_to_project(project, modules, template, force)
#modname = "foo"
#module = project.create_test_module_from_name(modname)
#pprint.pprint(module)
foo = Function("testfoo")
#module = project.find_module_by_full_path(modname)
#pprint.pprint(module)
#pprint.pprint(module.__dict__)
#pprint.pprint(dir(module))
#module.objects.append(foo)
#template = "unittest"
#generator = pythoscope.generator.TestGenerator.from_template(template)
#generator._add_tests_for_module(module, project, True)
code = test_unparse_ast
ast2 = ast.parse(inspect.getsource(code))
code2 = astunparse.unparse(ast2)
m = project.create_module("tests/foo123.py", code=code2)
#pprint.pprint(m.__dict__)
pprint.pprint(project.__dict__)
for module in project.get_modules():
module.changed =True
# print("Calling save() on module %r" % module.subpath)
# module.save()
project.save()
if __name__ == '__main__':
    pythoscope_t3st()
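The script above depends on an external astunparse checkout for the parse → unparse round trip. On Python 3.9+ the standard library can do the same thing; a self-contained sketch (using a string source rather than `inspect.getsource`):

```python
import ast

src = "def greet():\n    print('Hello Python!')\n"
tree = ast.parse(src)
source_again = ast.unparse(tree)  # stdlib since Python 3.9

# round trip: re-parsing the unparsed source yields an equivalent tree
assert ast.dump(ast.parse(source_again)) == ast.dump(tree)
assert "def greet" in source_again
```

Formatting details (quotes, blank lines) may differ after the round trip, but the AST is preserved.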
|
These two teens embody what ING’s partnership with UNICEF is all about.
Erion Nalli, 18, helped improve the safety of a street in his village with support from Power for Youth.
Meet Erion Nalli, an 18-year-old boy from a small village in Kosovo who wanted to fix a problem in his community. There was a main road without street lights that was dangerously dark – a road that kids use to get to school.
And then there’s Senija Lutvić, a 16-year-old Bosnian girl from Prizren, Kosovo, who wanted to do something to improve her community, but didn’t exactly know how.
What do these two teenagers have in common? They both participated in UNICEF Innovation Labs, supported by ING.
These labs are part of Power for Youth, ING’s partnership with UNICEF since 2005. The programme aims to give young people the knowledge and skills to become more socially and financially independent, improving both their own future as well as the futures of those around them.
Erion presents the plan to get lights on a dangerous street.
“I wanted to show that even young people can make a change, anywhere in the world,” says Erion.
“The programme helped us dig deeper into the problem,” he said. “We learned that the lack of street lights was also leading to accidents and incidents with stray dogs, among other things. Through the help of our mentors, we started tackling the problem in a holistic way.
ING and UNICEF have helped about one million children since we began partnering in 2005. We recently extended this partnership, now focusing on empowering adolescents in five countries: Kosovo, Montenegro, the Philippines, Vietnam and China. We teach them 21st century skills, including critical thinking, collaboration, and leadership.
This helps teenagers identify issues in their community and wider society, set goals, and solve problems with resilience and determination. Power for Youth’s focus on innovation helps adolescents develop into problem-solvers, decision-makers, and critical thinkers in both local and global contexts, thereby contributing to a more skilled workforce.
Senija Lutvić, 16, helped start a project to get clothes to people who need them.
She wound up helping to start ‘Wear and Care’, a social impact project that installed drop-boxes around the village to collect and donate clothes to people in need.
Senija explains her social impact project ‘Wear and Care’.
|
#!/usr/bin/python
import time
from Axon.background import background
from Kamaelia.UI.Pygame.Text import Textbox, TextDisplayer
from Axon.Handle import Handle
background().start()
try:
    import Queue as queue  # Python 2
except ImportError:
    import queue  # Python 3
TD = Handle(
TextDisplayer(position=(20, 90),
text_height=36,
screen_width=900,
screen_height=200,
background_color=(130,0,70),
text_color=(255,255,255)
)
).activate()
TB = Handle(
Textbox(position=(20, 340),
text_height=36,
screen_width=900,
screen_height=400,
background_color=(130,0,70),
text_color=(255,255,255)
)
).activate()
message = "hello\n"
while 1:
time.sleep(1)
try:
data = TB.get("outbox")
print (data)
message = data
except queue.Empty:
pass
TD.put(message, "inbox")
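The loop above polls the Textbox handle and keeps the last message when nothing new is waiting, swallowing `queue.Empty`. The same non-blocking pattern with a plain standard-library queue:

```python
import queue

q = queue.Queue()
message = "hello\n"

q.put("world\n")
for _ in range(2):
    try:
        message = q.get_nowait()  # raises queue.Empty when nothing is waiting
    except queue.Empty:
        pass  # keep the previous message

assert message == "world\n"  # first pass consumed the item; second pass kept it
```

`Handle.get` in Kamaelia raises the same `queue.Empty`, which is why the demo's try/except works on both Python 2 and 3 once the queue module is aliased.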
|
Outstanding views of the Valley! Gorgeous 4,307-square-foot Mediterranean residence located in the prestigious gated enclave of Prominence. Timeless materials throughout the property, such as granite countertops in the kitchen and tile floors. Enjoy the natural beauty of outdoor dining with a great exterior kitchen and lavish landscaping. Entertain in the updated kitchen, which features granite countertops, a gas range and seating around the island. Enjoy the valley's cold nights next to the fireplace in the family room, perfect for gatherings; downstairs you will also find a bedroom/office with a half bathroom. The master suite has a private balcony with views of the ridge and a double-sided gas fireplace, with a large jetted tub on the fireplace's other side. Four additional rooms, one with an en-suite. Truly a must-see property.
|
import subprocess
from . import base
class ViewProtobuf(base.View):
"""Human friendly view of protocol buffers
The view uses the protoc compiler to decode the binary
"""
name = "Protocol Buffer"
prompt = ("protobuf", "p")
content_types = [
"application/x-protobuf",
"application/x-protobuffer",
]
    @staticmethod
    def is_available():
        try:
            p = subprocess.Popen(
                ["protoc", "--version"],
                stdout=subprocess.PIPE
            )
            out, _ = p.communicate()
            # Popen pipes return bytes, so compare against a bytes literal
            return out.startswith(b"libprotoc")
        except OSError:
            return False
def decode_protobuf(self, content):
# if Popen raises OSError, it will be caught in
# get_content_view and fall back to Raw
p = subprocess.Popen(['protoc', '--decode_raw'],
stdin=subprocess.PIPE,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE)
out, err = p.communicate(input=content)
if out:
return out
else:
return err
def __call__(self, data, **metadata):
decoded = self.decode_protobuf(data)
return "Protobuf", base.format_text(decoded)
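`decode_protobuf` above pipes raw bytes through an external process and returns whichever of stdout/stderr is non-empty. A self-contained sketch of the same `Popen`/`communicate` pattern, substituting the Python interpreter for `protoc` so it runs anywhere:

```python
import subprocess
import sys

p = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.write(sys.stdin.read().upper())"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
# communicate() writes the input, closes stdin, and reads both pipes to EOF,
# avoiding the deadlocks that manual read/write on the pipes can cause
out, err = p.communicate(input=b"field1: 42")
assert out == b"FIELD1: 42"
assert err == b""
```

As in the view above, the input and output are bytes; `--decode_raw` would similarly read serialized protobuf bytes on stdin and emit a text rendering on stdout.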
|
With this plugin you can hide any div behind a Quantum [Paper] Button, or Qutton.
Quantum Paper is a digital paper that can change its size, shape and color to accommodate new content. Quantum Paper is part of Google's Material Design.
|
import datetime # use this only for checking types. use django.utils.datetime_safe for handling actual dates
from django.conf import settings
from django.core.exceptions import ObjectDoesNotExist, MultipleObjectsReturned
from django.db import models, transaction
from django.db.models import permalink, Q
from django.db.models.query import EmptyQuerySet
from django.utils.datetime_safe import date
from django.utils.translation import ugettext_lazy as _
from django.template.defaultfilters import slugify
from django.core.urlresolvers import reverse
from django.contrib.contenttypes.models import ContentType
from model_utils import Choices
from model_utils.managers import PassThroughManager
from model_utils.models import TimeStampedModel
from sorl.thumbnail import ImageField
from open_municipio.monitoring.models import MonitorizedItem
from open_municipio.newscache.models import NewsTargetMixin
from open_municipio.people.managers import ( TimeFramedQuerySet, GroupQuerySet,
ChargeQuerySet )
from open_municipio.om_utils.models import SlugModel
import open_municipio
from collections import Counter
#
# Persons, charges and groups
#
class Person(models.Model, MonitorizedItem):
"""
The ``related_news`` attribute can be used to fetch news related to a given person.
"""
FEMALE_SEX = 0
MALE_SEX = 1
SEX = Choices(
(MALE_SEX, _('Male')),
(FEMALE_SEX, _('Female')),
)
first_name = models.CharField(_('first name'), max_length=128)
last_name = models.CharField(_('last name'), max_length=128)
birth_date = models.DateField(_('birth date'))
birth_location = models.CharField(_('birth location'), blank=True, max_length=128)
slug = models.SlugField(unique=True, blank=True, null=True, max_length=128)
sex = models.IntegerField(_('sex'), choices=SEX)
op_politician_id = models.IntegerField(_('openpolis politician ID'), blank=True, null=True)
img = ImageField(upload_to="person_images", blank=True, null=True)
# manager to handle the list of monitoring having as content_object this instance
#monitoring_set = generic.GenericRelation(Monitoring, object_id_field='object_pk')
class Meta:
verbose_name = _('person')
verbose_name_plural = _('people')
def __unicode__(self):
return u'%s, %s' % (self.last_name, self.first_name)
def save(self, *args, **kwargs):
if self.slug is None:
self.slug = slugify("%s %s %s" % (self.first_name, self.last_name, self.birth_date))
super(Person, self).save(*args, **kwargs)
@permalink
def get_absolute_url(self):
return 'om_politician_detail', (), { 'slug': self.slug }
@property
def openpolis_link(self):
link = None
if self.op_politician_id:
link = settings.OP_URL_TEMPLATE % { "op_id":self.op_politician_id }
return link
@property
def is_om_user(self):
"""
check whether the person is a registered om user
"""
try:
prof = self.userprofile
return True
except ObjectDoesNotExist:
return False
@property
def full_name(self):
return "%s %s" % (self.first_name, self.last_name)
@property
def all_institution_charges(self):
"""
Returns the QuerySet of all institution charges held by this person during his/her career.
"""
return self.institutioncharge_set.select_related().all()
def get_past_institution_charges(self, moment=None):
return self.institutioncharge_set.select_related().past(moment=moment)\
.exclude(institution__institution_type__in=(Institution.COMMITTEE, Institution.JOINT_COMMITTEE))\
.order_by('-start_date')
past_institution_charges = property(get_past_institution_charges)
def get_current_institution_charges(self, moment=None):
"""
Returns the current institution charges at the given moment (no committees).
"""
return self.institutioncharge_set.select_related().current(moment=moment).exclude(
institution__institution_type__in=(Institution.COMMITTEE, Institution.JOINT_COMMITTEE)
)
current_institution_charges = property(get_current_institution_charges)
def get_current_committee_charges(self, moment=None):
"""
Returns the current committee charges, at the given moment.
"""
return self.institutioncharge_set.select_related().current(moment=moment).filter(
institution__institution_type__in=(Institution.COMMITTEE, Institution.JOINT_COMMITTEE)
).order_by('-institutionresponsability__charge_type','institution__position')
current_committee_charges = property(get_current_committee_charges)
def get_current_charge_in_institution(self, institution, moment=None):
"""
Returns the current charge in the given institution at the given moment.
        Raises ObjectDoesNotExist if no charge is found, MultipleObjectsReturned if more than one.
"""
charges = self.institutioncharge_set.select_related().current(moment=moment).filter(
institution=institution
)
if charges.count() == 1:
return charges[0]
elif charges.count() == 0:
raise ObjectDoesNotExist
else:
raise MultipleObjectsReturned
def has_current_charges(self, moment=None):
"""
Used for admin interface
"""
if self.institutioncharge_set.current(moment).count() > 0:
return True
else:
return False
has_current_charges.short_description = _('Current')
def is_counselor(self, moment=None):
"""
check if the person is a member of the council at the given moment
"""
if self.current_counselor_charge(moment):
return True
else:
return False
def current_counselor_charge(self, moment=None):
"""
fetch the current charge in Council, if any
"""
i = Institution.objects.get(institution_type=Institution.COUNCIL)
try:
ic = self.get_current_charge_in_institution(i, moment)
return ic
except ObjectDoesNotExist:
return None
def last_charge(self, moment=None):
"""
last charge, if any
"""
charges = self.current_institution_charges if self.has_current_charges() else self.past_institution_charges
if charges.count() > 0:
return charges[0]
else:
raise ObjectDoesNotExist
def get_historical_groupcharges(self, moment=None):
"""
Returns all groupcharges for the person
"""
i = Institution.objects.get(institution_type=Institution.COUNCIL)
try:
ic = self.get_current_charge_in_institution(i, moment)
gc = GroupCharge.objects.select_related().past(moment).filter(charge=ic)
except ObjectDoesNotExist:
gc = None
return gc
historical_groupcharges = property(get_historical_groupcharges)
def get_current_groupcharge(self, moment=None):
"""
Returns GroupCharge at given moment in time (now if moment is None)
        Charge is the InstitutionCharge in the council
"""
i = Institution.objects.get(institution_type=Institution.COUNCIL)
try:
ic = self.get_current_charge_in_institution(i, moment)
gc = GroupCharge.objects.select_related().current(moment).get(charge=ic)
except ObjectDoesNotExist:
gc = None
return gc
current_groupcharge = property(get_current_groupcharge)
def get_current_group(self, moment=None):
"""
Returns group at given moment in time (now if moment is None)
        Group is computed from GroupCharge, where Charge is the InstitutionCharge in the council
Returns None if there is no current group.
"""
gc = self.get_current_groupcharge(moment)
if gc is None:
return None
return gc.group
current_group = property(get_current_group)
@property
def resources(self):
"""
Returns the list of resources associated with this person
"""
return self.resource_set.all()
@property
def content_type_id(self):
"""
Return id of the content type associated with this instance.
"""
return ContentType.objects.get_for_model(self).id
@property
def age(self):
"""
        Returns the age in whole years, approximated as days since birth divided by 365
"""
#end_date = in_date if in_date else date.today()
return (date.today() - self.birth_date).days / 365
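    # Sanity check of the approximation above (dates are hypothetical):
    # (date(2013, 6, 15) - date(1970, 6, 15)).days / 365 yields 43, but the
    # accumulated leap days mean the result can be off by one in the days
    # immediately before a birthday.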
@property
def related_news(self):
"""
        News related to a politician are the union of the news related to all the
        politician's current and past institution charges
"""
news = EmptyQuerySet()
for c in self.all_institution_charges:
news |= c.related_news
return news
@property
def speeches(self):
"""
Speeches of a politician
"""
from open_municipio.acts.models import Speech
return Speech.objects.filter(author=self)
@property
def n_speeches(self):
"""
Number of speeches of a politician
"""
return self.speeches.count()
@property
def speeches_size(self):
"""
Number of speeches of a politician
"""
return sum([s.text_size for s in self.speeches.all()])
class Resource(models.Model):
"""
    This class maps the internet resources (mail, web sites, RSS, Facebook, Twitter, ...)
    It must be subclassed, by a PersonResource, InstitutionResource or GroupResource class.
    The `value` field contains the resource.
    The `description` field may be used to specify the context.
    A `PERSON` resource may be a secretary or another contact person: only the
    name is of interest, so it is not mapped as a full entity in the system.
"""
RES_TYPE = Choices(
('EMAIL', 'email', _('email')),
('URL', 'url', _('url')),
('PHONE', 'phone', _('phone')),
('FAX', 'fax', _('fax')),
('SNAIL', 'snail', _('snail mail')),
('PERSON', 'person', _('person')),
('TWITTER', 'twitter', _('twitter')),
('FACEBOOK', 'facebook', _('facebook')),
('FINANCIAL', 'financial', _('financial information')),
)
resource_type = models.CharField(verbose_name=_('type'), max_length=10, choices=RES_TYPE)
# 2000 chars is the maximum length suggested for url length (see: http://stackoverflow.com/questions/417142/what-is-the-maximum-length-of-a-url-in-different-browsers )
value = models.CharField(verbose_name=_('value'), max_length=2000)
description = models.CharField(verbose_name=_('description'), max_length=255, blank=True)
class Meta:
abstract = True
verbose_name = _('Resource')
        verbose_name_plural = _('Resources')
class PersonResource(Resource):
person = models.ForeignKey('Person', verbose_name=_('person'), related_name='resource_set')
class InstitutionResource(Resource):
institution = models.ForeignKey('Institution', verbose_name=_('institution'), related_name='resource_set')
class GroupResource(Resource):
group = models.ForeignKey('Group', verbose_name=_('group'), related_name='resource_set')
class Charge(NewsTargetMixin, models.Model):
"""
This is the base class for the different macro-types of charges (institution, organization, administration).
The ``related_news`` attribute can be used to fetch news items related to a given charge.
"""
person = models.ForeignKey('Person', verbose_name=_('person'))
start_date = models.DateField(_('start date'))
end_date = models.DateField(_('end date'), blank=True, null=True)
end_reason = models.CharField(_('end reason'), blank=True, max_length=255)
description = models.CharField(_('description'), blank=True, max_length=255,
help_text=_('Insert the complete description of the charge, if it gives more information than the charge type'))
# objects = PassThroughManager.for_queryset_class(TimeFramedQuerySet)()
objects = PassThroughManager.for_queryset_class(ChargeQuerySet)()
class Meta:
abstract = True
def get_absolute_url(self):
return self.person.get_absolute_url()
# @property
def is_in_charge(self, as_of=None):
if not as_of:
#as_of = datetime.now()
as_of = date.today()
# if a datetime, extract the date part
if isinstance(as_of, datetime.datetime):
as_of = as_of.date()
# check we receive a date (note: a datetime is also a date, but
# we already took care of this case in the previous lines)
if not isinstance(as_of, datetime.date):
raise ValueError("The passed parameter is not a date")
return as_of >= self.start_date and (not self.end_date or as_of <= self.end_date)
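    # Usage sketch (dates are hypothetical): for a charge with
    # start_date=date(2010, 6, 1) and end_date=None,
    # charge.is_in_charge(date(2011, 1, 1)) is True; passing a datetime also
    # works, since the time part is stripped before comparing.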
@property
def duration(self):
if not self.start_date: return None
# return (self.end_date if self.end_date else datetime.datetime.now().date()) - self.start_date
return (self.end_date if self.end_date else date.today()) - self.start_date
@property
def speeches(self):
"""
Speeches of a charge
"""
        start_date = self.start_date
        end_date = self.end_date if self.end_date else datetime.datetime.now()
return open_municipio.acts.models.Speech.objects.filter(\
author=self.person, sitting_item__sitting__date__range=(start_date, end_date))
@property
def n_speeches(self):
"""
Number of speeches of a charge
"""
return self.speeches.count()
@property
def speeches_size(self):
"""
Number of speeches of a charge
"""
return sum([s.text_size for s in self.speeches.all()])
class ChargeResponsability(models.Model):
"""
    Describes a responsability that the charge holds inside the
    charge's *container*; it complements the composition relation.
    For example: a counselor may be the president of the council.
    This is an abstract class that must be subclassed in order to specify
    the context (institution charge or group charge)
"""
start_date = models.DateField(_('start date'))
end_date = models.DateField(_('end date'), blank=True, null=True)
description = models.CharField(_('description'), blank=True, max_length=255,
help_text=_('Insert an extended description of the responsability'))
objects = PassThroughManager.for_queryset_class(TimeFramedQuerySet)()
class Meta:
abstract = True
class InstitutionCharge(Charge):
"""
This is a charge in the institution (city council, city government, mayor, committee).
"""
substitutes = models.OneToOneField('InstitutionCharge', blank=True, null=True,
related_name='reverse_substitute_set',
on_delete=models.PROTECT,
verbose_name=_('in substitution of'))
substituted_by = models.OneToOneField('InstitutionCharge', blank=True, null=True,
related_name='reverse_substituted_by_set',
on_delete=models.PROTECT,
verbose_name=_('substituted by'))
institution = models.ForeignKey('Institution', on_delete=models.PROTECT, verbose_name=_('institution'), related_name='charge_set')
op_charge_id = models.IntegerField(_('openpolis institution charge ID'), blank=True, null=True)
original_charge = models.ForeignKey('InstitutionCharge', blank=True, null=True,
related_name='committee_charge_set',
verbose_name=_('original institution charge'))
n_rebel_votations = models.IntegerField(default=0)
n_present_votations = models.IntegerField(default=0, verbose_name=_("number of presences during votes"))
n_absent_votations = models.IntegerField(default=0, verbose_name=_("number of absences during votes"))
n_present_attendances = models.IntegerField(default=0, verbose_name=_("number of present attendances"))
n_absent_attendances = models.IntegerField(default=0, verbose_name=_("number of absent attendances"))
can_vote = models.BooleanField(default=True, verbose_name=_("in case of a city council member, specifies whether he/she can vote"))
def get_absolute_url(self):
url = None
if self.institution.institution_type == Institution.COMMITTEE:
url = self.person.get_absolute_url()
else:
url = reverse("om_politician_detail",
kwargs={"slug":self.person.slug,
"institution_slug": self.institution.slug,
"year":self.start_date.year, "month": self.start_date.month,
"day":self.start_date.day })
return url
def is_counselor(self):
return self.institution.institution_type == Institution.COUNCIL
@property
def is_in_city_government(self):
return (self.institution.institution_type == Institution.CITY_GOVERNMENT or \
self.institution.institution_type == Institution.MAYOR)
class Meta(Charge.Meta):
db_table = u'people_institution_charge'
verbose_name = _('institution charge')
verbose_name_plural = _('institution charges')
ordering = ['person__first_name', 'person__last_name']
def __unicode__(self):
if self.denomination:
return u"%s %s - %s" % (self.person.first_name, self.person.last_name, self.denomination)
else:
return u"%s %s" % (self.person.first_name, self.person.last_name)
# TODO: model validation: check that ``substitutes`` and ``substituted_by`` fields
# point to ``InstitutionCharge``s of the same kind
@property
def denomination(self):
if self.institution.institution_type == Institution.MAYOR:
denomination = _('Mayor') #.translate(settings.LANGUAGE_CODE) #-FS why?
if self.description != "":
denomination += ", %s" % self.description
return denomination
elif self.institution.institution_type == Institution.CITY_GOVERNMENT:
if self.responsabilities.count():
s = self.responsabilities[0].get_charge_type_display()
if self.responsabilities[0].charge_type == InstitutionResponsability.CHARGE_TYPES.firstdeputymayor:
s += ", %s" % self.description
return "%s" % (s, )
else:
return " %s" % self.description
elif self.institution.institution_type == Institution.COUNCIL:
if self.responsabilities.count():
return "%s Consiglio Comunale" % (self.responsabilities[0].get_charge_type_display(),)
else:
return _('Counselor')
elif self.institution.institution_type == Institution.COMMITTEE:
if self.responsabilities.count():
return "%s" % (self.responsabilities[0].get_charge_type_display())
else:
return _('Member').translate(settings.LANGUAGE_CODE)
else:
return ''
@property
def committee_charges(self):
return self.committee_charge_set.all()
@property
def responsabilities(self):
return self.institutionresponsability_set.all()
def get_current_responsability(self, moment=None):
"""
        Returns the current institution responsability, if any
"""
if self.responsabilities.current(moment=moment).count() == 0:
return None
if self.responsabilities.current(moment=moment).count() == 1:
return self.responsabilities.current(moment=moment)[0]
raise MultipleObjectsReturned
current_responsability = property(get_current_responsability)
@property
def presented_acts(self):
"""
The QuerySet of acts presented by this charge.
"""
return self.presented_act_set.all()
@property
def n_presented_acts(self):
"""
The number of acts presented by this charge
"""
return self.presented_acts.count()
@property
def received_acts(self):
"""
The QuerySet of acts received by this charge.
"""
return self.received_act_set.all()
@property
def n_received_acts(self):
"""
        The number of acts received by this charge.
"""
return self.received_act_set.count()
@property
def charge_type(self):
"""
Returns the basic charge type translated string, according to the institution.
For example: the council president's basic type is counselor.
"""
if self.institution.institution_type == Institution.MAYOR:
return _('Mayor')
elif self.institution.institution_type == Institution.CITY_GOVERNMENT:
return _('City government member')
elif self.institution.institution_type == Institution.COUNCIL:
return _('Counselor')
elif self.institution.institution_type == Institution.COMMITTEE:
return _('Committee member')
else:
return 'Unknown charge type!'
@property
def charge_type_verbose(self):
"""
"""
s = self.charge_type
if self.start_date:
if self.end_date and self.start_date.year == self.end_date.year:
s += ' nel ' + str(self.start_date.year)
else:
s += ' dal ' + str(self.start_date.year)
if self.end_date:
s += ' al ' + str(self.end_date.year)
return s
@property
def council_group(self):
"""
DEPRECATED: use `self.current_groupcharge.group`
Returns the city council's group this charge currently belongs to (if any).
"""
return self.current_groupcharge.group
@property
def current_groupcharge(self):
"""
Returns the current group related to a council charge (end_date is null).
A single GroupCharge object is returned. The group may be accessed by the `.group` attribute
A Council Institution charge MUST have one group.
Other types of charge do not have a group, so None is returned.
"""
return self.current_at_moment_groupcharge()
def current_at_moment_groupcharge(self, moment=None):
"""
Returns groupcharge at given moment in time.
If moment is None, then current groupcharge is returned
"""
if self.institution.institution_type == Institution.COUNCIL:
try:
return GroupCharge.objects.select_related().current(moment=moment).get(charge__id=self.id)
except GroupCharge.DoesNotExist:
return None
elif self.original_charge and \
(self.institution.institution_type == Institution.COMMITTEE or \
self.institution.institution_type == Institution.JOINT_COMMITTEE):
try:
return GroupCharge.objects.select_related().current(moment=moment).get(charge=self.original_charge)
except GroupCharge.DoesNotExist:
return None
else:
return None
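    # Resolution sketch: a COUNCIL charge looks up its own GroupCharge, while a
    # (JOINT_)COMMITTEE charge delegates to the council charge it mirrors:
    #
    #   committee_charge.original_charge        # the underlying council charge
    #   committee_charge.current_groupcharge    # resolved through original_charge
    #
    # Any other institution type has no group, hence None is returned.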
@property
def historical_groupcharges(self):
"""
Returns the list of past groups related to a council charge (end_date is not null).
A list of GroupCharge objects is returned. The group may be accessed by the `.group` attribute
"""
if self.institution.institution_type == Institution.COUNCIL:
return GroupCharge.objects.select_related().past().filter(charge__id=self.id)
else:
return []
def update_rebellion_cache(self):
"""
Re-compute the number of votations where the charge has vote differently from her group
and update the n_rebel_votations counter
"""
self.n_rebel_votations = self.chargevote_set.filter(is_rebel=True).count()
self.save()
def update_presence_cache(self):
"""
Re-compute the number of votations where the charge was present/absent
and update the respective counters
"""
from open_municipio.votations.models import ChargeVote
from open_municipio.attendances.models import ChargeAttendance
absent = ChargeVote.VOTES.absent
self.n_present_votations = self.chargevote_set.exclude(vote=absent).count()
self.n_absent_votations = self.chargevote_set.filter(vote=absent).count()
self.n_present_attendances = self.chargeattendance_set.filter(value=ChargeAttendance.VALUES.pres).count()
self.n_absent_attendances = self.chargeattendance_set.exclude(value=ChargeAttendance.VALUES.pres).count()
self.save()
@property
def taxonomy_count(self):
        # note: the 'topics' counter is initialized but never populated here
        count = { 'categories' : Counter(), 'tags' : Counter(), 'topics' : Counter(), 'locations' : Counter() }
for act in self.presented_acts:
count['categories'].update(act.categories)
count['tags'].update(act.tags)
count['locations'].update(act.locations)
return count
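    # Usage sketch (illustrative): the counters can be ranked to obtain the
    # charge's most frequent classifications, e.g.
    #
    #   charge.taxonomy_count['categories'].most_common(5)
    #
    # relying on collections.Counter.most_common().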
class InstitutionResponsability(ChargeResponsability):
"""
Responsability for institutional charges.
"""
CHARGE_TYPES = Choices(
('MAYOR', 'mayor', _('Mayor')),
('FIRSTDEPUTYMAYOR', 'firstdeputymayor', _('First deputy mayor')),
('PRESIDENT', 'president', _('President')),
('VICE', 'vice', _('Vice president')),
('VICEVICE', 'vicevice', _('Vice vice president')),
)
charge = models.ForeignKey(InstitutionCharge, verbose_name=_('charge'))
charge_type = models.CharField(_('charge type'), max_length=16, choices=CHARGE_TYPES)
class Meta:
verbose_name = _('institutional responsability')
verbose_name_plural = _('institutional responsabilities')
class CompanyCharge(Charge):
"""
This is a charge in a company controlled by the municipality (it: partecipate).
"""
CEO_CHARGE = 1
PRES_CHARGE = 2
VICE_CHARGE = 3
DIR_CHARGE = 4
CHARGE_TYPES = Choices(
(CEO_CHARGE, _('Chief Executive Officer')),
(PRES_CHARGE, _('President')),
(VICE_CHARGE, _('Vice president')),
(DIR_CHARGE, _('Member of the board')),
)
company = models.ForeignKey('Company', on_delete=models.PROTECT, verbose_name=_('company'), related_name='charge_set')
charge_type = models.IntegerField(_('charge type'), choices=CHARGE_TYPES)
class Meta(Charge.Meta):
db_table = u'people_organization_charge'
verbose_name = _('organization charge')
verbose_name_plural = _('organization charges')
def __unicode__(self):
# TODO: implement ``get_charge_type_display()`` method
return u'%s - %s' % (self.get_charge_type_display(), self.company.name)
class AdministrationCharge(Charge):
"""
This is a charge in the internal municipality administration.
"""
DIR_CHARGE = 1
EXEC_CHARGE = 2
CHARGE_TYPES = Choices(
(DIR_CHARGE, _('Director')),
(EXEC_CHARGE, _('Executive')),
)
office = models.ForeignKey('Office', on_delete=models.PROTECT, verbose_name=_('office'), related_name='charge_set')
charge_type = models.IntegerField(_('charge type'), choices=CHARGE_TYPES)
class Meta(Charge.Meta):
db_table = u'people_administration_charge'
verbose_name = _('administration charge')
verbose_name_plural = _('administration charges')
def __unicode__(self):
# TODO: implement ``get_charge_type_display()`` method
return u'%s - %s' % (self.get_charge_type_display(), self.office.name)
class Group(models.Model):
"""
This model represents a group of counselors.
"""
name = models.CharField(max_length=100)
acronym = models.CharField(blank=True, max_length=16)
charge_set = models.ManyToManyField('InstitutionCharge', through='GroupCharge')
slug = models.SlugField(unique=True, blank=True, null=True, help_text=_('Suggested value automatically generated from name, must be unique'))
img = ImageField(upload_to="group_images", blank=True, null=True)
start_date = models.DateField(blank=True, null=True, verbose_name=_("start date"))
end_date = models.DateField(blank=True, null=True, verbose_name=_("end date"))
objects = PassThroughManager.for_queryset_class(GroupQuerySet)()
class Meta:
verbose_name = _('group')
verbose_name_plural = _('groups')
ordering = ("name", "acronym", )
def get_absolute_url(self):
return reverse("om_institution_group", kwargs={'slug': self.slug})
def __unicode__(self):
if self.start_date:
return u'%s (%s, %s)' % (self.name, self.acronym, self.start_date.year)
else:
return u'%s (%s)' % (self.name, self.acronym)
@property
def leader(self):
"""
The current leader of the Group as GroupResponsability.
None if not found.
To fetch the InstitutionCharge, .groupcharge.charge.
"""
try:
leader = GroupResponsability.objects.select_related().get(
charge__group=self,
charge_type=GroupResponsability.CHARGE_TYPES.leader,
end_date__isnull=True
)
return leader
except ObjectDoesNotExist:
return None
@property
def deputy(self):
"""
The current deputy leader of the Group as GroupResponsability.
None if not found.
To fetch the InstitutionCharge, .groupcharge.charge.
"""
try:
deputy = GroupResponsability.objects.select_related().get(
charge__group=self,
charge_type=GroupResponsability.CHARGE_TYPES.deputy,
end_date__isnull=True
)
return deputy
except ObjectDoesNotExist:
return None
@property
def members(self):
"""
Current members of the group, as institution charges, leader and
council president and vice presidents **excluded**.
"""
group_members = self.groupcharge_set.current().exclude(
groupresponsability__charge_type__in=(
GroupResponsability.CHARGE_TYPES.leader,
GroupResponsability.CHARGE_TYPES.deputy
),
groupresponsability__end_date__isnull=True
)
return self.institution_charges.filter(groupcharge__in=group_members)
"""
President and vice-president may be excluded
.exclude(
groupcharge__charge__institutionresponsability__charge_type__in=(
InstitutionResponsability.CHARGE_TYPES.president,
InstitutionResponsability.CHARGE_TYPES.vice
)
)
"""
@property
def alpha_members(self):
"""
Alphabetically sorted members
"""
return self.members.order_by('person__last_name')
def get_institution_charges(self, moment=None):
"""
All current institution charges in the group, leader **included**
"""
return self.charge_set.all().current(moment=moment)
institution_charges = property(get_institution_charges)
@property
def current_size(self):
"""
returns number of current charges
"""
return self.groupcharge_set.current().count()
@property
def is_current(self):
"""
returns True if the group has at least one current charge
"""
return self.groupcharge_set.current().count() > 0
@property
def majority_records(self):
return self.groupismajority_set.all()
@property
def in_council_now(self):
today = date.today()
found = self.majority_records.filter(Q(end_date__gt=today) | Q(end_date__isnull=True))
return found.count() > 0
@property
def is_majority_now(self):
        # only one majority record with no ``end_date`` (or with an ``end_date``
        # set in the future) should exist at a time (i.e. the current one)
today = date.today()
found = self.majority_records.filter(is_majority=True).exclude(end_date__lt=today)
return found.count() > 0
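    # Timeline sketch (hypothetical data): a group moving from majority to
    # opposition would carry two GroupIsMajority records, e.g.
    #   (is_majority=True,  start_date=2009-06-01, end_date=2011-05-31)
    #   (is_majority=False, start_date=2011-06-01, end_date=None)
    # so is_majority_now becomes False once the first record's end_date passes.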
@property
def resources(self):
return self.resource_set.all()
class GroupCharge(models.Model):
"""
This model records the historical composition of council groups.
This only makes sense for ``InstitutionCharges``.
"""
group = models.ForeignKey('Group', verbose_name=_("group"))
charge = models.ForeignKey('InstitutionCharge', verbose_name=_("charge"))
charge_description = models.CharField(blank=True, max_length=255, verbose_name=_("charge description"))
start_date = models.DateField(verbose_name=_("start date"))
end_date = models.DateField(blank=True, null=True, verbose_name=_("end date"))
end_reason = models.CharField(blank=True, max_length=255, verbose_name=_("end reason"))
objects = PassThroughManager.for_queryset_class(TimeFramedQuerySet)()
@property
def responsabilities(self):
return self.groupresponsability_set.all()
def get_current_responsability(self, moment=None):
"""
Returns the current group responsability, if any
"""
if self.responsabilities.current(moment=moment).count() == 0:
return None
if self.responsabilities.current(moment=moment).count() == 1:
return self.responsabilities.current(moment=moment)[0]
raise MultipleObjectsReturned
current_responsability = property(get_current_responsability)
@property
def responsability(self):
if self.responsabilities.count() == 1:
r = self.responsabilities[0]
end_date = ""
if r.end_date:
end_date = " - %s" % r.end_date
s = "%s: %s%s" % (r.get_charge_type_display(), r.start_date, end_date)
return s
else:
return ""
class Meta:
db_table = u'people_group_charge'
verbose_name = _('group charge')
verbose_name_plural = _('group charges')
def __unicode__(self):
if self.responsability:
return u"%s - %s - %s" % (self.group.acronym, self.charge.person, self.responsability)
else:
return u"%s - %s" % (self.group.acronym, self.charge.person)
class GroupResponsability(ChargeResponsability):
"""
Responsibility for group charges.
"""
CHARGE_TYPES = Choices(
('LEADER', 'leader', _('Group leader')),
('DEPUTY', 'deputy', _('Group deputy leader')),
)
charge_type = models.CharField(_('charge type'), max_length=16, choices=CHARGE_TYPES)
charge = models.ForeignKey(GroupCharge, verbose_name=_('charge'))
def __unicode__(self):
end_date = ""
if self.end_date:
end_date = " - %s" % self.end_date
return u"%s (%s%s)" % (self.get_charge_type_display(), self.start_date, end_date)
class Meta:
verbose_name = _("group responsibility")
verbose_name_plural = _("group responsibilities")
class GroupIsMajority(models.Model):
"""
This model records the historical composition of the majority
"""
group = models.ForeignKey('Group')
is_majority = models.NullBooleanField(_('Is majority'), default=False, null=True)
start_date = models.DateField(_('Start date'))
end_date = models.DateField(_('End date'), blank=True, null=True)
objects = PassThroughManager.for_queryset_class(TimeFramedQuerySet)()
class Meta:
verbose_name = _('group majority')
verbose_name_plural = _('group majorities')
def __unicode__(self):
if self.is_majority:
return u'yes'
elif self.is_majority is False:
return u'no'
else:
return u'na'
#
# Bodies
#
class Body(SlugModel):
"""
The base model for bodies.
Uses the *abstract base class* inheritance model.
"""
name = models.CharField(_('name'), max_length=255)
slug = models.SlugField(unique=True, blank=True, null=True, help_text=_('Suggested value automatically generated from name, must be unique'))
description = models.TextField(_('description'), blank=True)
@property
def lowername(self):
return self.name.lower()
class Meta:
abstract = True
def __unicode__(self):
return u'%s' % (self.name,)
class Institution(Body):
"""
Institutional bodies can be of different types (as specified by the ``institution_type`` field).
This model has a relation with itself, in order to map hierarchical bodies (joint committees, ...).
"""
MAYOR = 1
CITY_GOVERNMENT = 2
COUNCIL = 3
COMMITTEE = 4
JOINT_COMMITTEE = 5
INSTITUTION_TYPES = Choices(
(MAYOR, _('Mayor')),
(COUNCIL, _('Council')),
(CITY_GOVERNMENT, _('Town government')),
(COMMITTEE, _('Committee')),
(JOINT_COMMITTEE, _('Joint committee')),
)
parent = models.ForeignKey('Institution', related_name='sub_body_set', blank=True, null=True)
institution_type = models.IntegerField(choices=INSTITUTION_TYPES)
position = models.PositiveIntegerField(editable=False, default=0)
class Meta(Body.Meta):
verbose_name = _('institution')
verbose_name_plural = _('institutions')
ordering = ('position',)
def save(self, *args, **kwargs):
"""slugify name on first save"""
if not self.id:
self.slug = slugify(self.name)
# set position
qs = self.__class__.objects.order_by('-position')
try:
self.position = qs[0].position + 1
except IndexError:
self.position = 0
super(Institution, self).save(*args, **kwargs)
def get_absolute_url(self):
if self.institution_type == self.MAYOR:
return reverse("om_institution_mayor")
elif self.institution_type == self.CITY_GOVERNMENT:
return reverse("om_institution_citygov")
elif self.institution_type == self.COUNCIL:
return reverse("om_institution_council")
elif self.institution_type == self.COMMITTEE:
return reverse("om_institution_committee", kwargs={'slug': self.slug})
@property
def sittings(self):
"""
        A Sitting is linked to an Institution through the fields "institution" and
        "other_institution". The related name of the former is "sitting_set",
        while the related name of the latter is "other_sittings". To get all the
        sittings of this Institution, take the (distinct) union of the two
qs = (self.sitting_set.all() | self.other_sittings.all()).distinct()
return qs
@property
def name_with_preposition(self):
"""
        Returns the name preceded by the correct (Italian) preposition
"""
if self.institution_type == self.MAYOR:
return "del %s" % self.name
elif self.institution_type == self.CITY_GOVERNMENT:
return "della %s" % self.name
elif self.institution_type == self.COUNCIL:
return "del %s" % self.name
elif self.institution_type == self.COMMITTEE:
return "della %s" % self.name
return self.name
@property
def charges(self):
"""
The QuerySet of all *current* charges (``InstitutionCharge`` instances)
associated with this institution.
"""
return self.get_current_charges(moment=None)
def get_current_charges(self, moment=None):
"""
        The QuerySet of all charges current at the specified moment
"""
return self.charge_set.all().current(moment)
@property
def firstdeputy(self):
"""
The current firstdeputy mayor of the institution as InstitutionResponsability.
None if not found.
To access the charge: firstdeputy.charge
"""
try:
return InstitutionResponsability.objects.select_related().get(
charge__institution=self,
charge_type=InstitutionResponsability.CHARGE_TYPES.firstdeputymayor,
end_date__isnull=True
)
except ObjectDoesNotExist:
return None
@property
def president(self):
"""
The current president of the institution as InstitutionResponsability.
None if not found.
To access the charge: pres.charge
"""
try:
pres = InstitutionResponsability.objects.select_related().get(
charge__institution=self,
charge_type=InstitutionResponsability.CHARGE_TYPES.president,
end_date__isnull=True
)
return pres
except ObjectDoesNotExist:
return None
@property
def vicepresidents(self):
"""
The current vice presidents of the institution, as InstitutionResponsabilities
There can be more than one vicepresident.
To access the charge: vp.charge
"""
return InstitutionResponsability.objects.select_related().filter(
charge__institution=self,
charge_type=InstitutionResponsability.CHARGE_TYPES.vice,
end_date__isnull=True
)
@property
def members(self):
"""
Members of the institution, as charges.
Current mayor, first deputy, president and vice presidents **excluded**.
"""
return self.charges.exclude(
institutionresponsability__charge_type__in=(
InstitutionResponsability.CHARGE_TYPES.mayor,
InstitutionResponsability.CHARGE_TYPES.firstdeputymayor,
InstitutionResponsability.CHARGE_TYPES.president,
InstitutionResponsability.CHARGE_TYPES.vice,
),
institutionresponsability__end_date__isnull=True
).select_related()
@property
def emitted_acts(self):
"""
The QuerySet of all acts emitted by this institution.
Note that the objects comprising the resulting QuerySet aren't generic ``Act`` instances,
but instances of specific ``Act`` subclasses (i.e. ``Deliberation``, ``Motion``, etc.).
This is made possible by the fact that the default manager for the ``Act`` model is
``model_utils.managers.InheritanceManager``, and this manager class declares
``use_for_related_fields = True``. See `Django docs`_ for details.
.. _`Django docs`: https://docs.djangoproject.com/en/1.3/topics/db/managers/#controlling-automatic-manager-types
"""
# NOTE: See also Django bug #14891
return self.emitted_act_set.all().select_subclasses()
@property
def resources(self):
return self.resource_set.all()
@transaction.commit_on_success
def _move(self, up):
"""
Moving an object may require updating the whole list of objects:
we cannot assume the position values are consecutive, since insertions
and deletions can leave "bubbles" and duplicate position values.
The sorting algorithm goes like this:
- assign every object a consecutive, unique position value
- detect the previous and next institution, w.r.t. self
- if up, switch position with previous and save previous
- if down, switch position with next and save next
- save self
"""
# NOTE: the order_by() result must be kept; calling it without
# reassignment would silently discard the ordering.
qs = self.__class__._default_manager.all().order_by("position")
p = 0
prev_inst = None
next_inst = None
found = False
for curr_inst in qs:
found = found or (curr_inst == self)
if curr_inst.position != p:
curr_inst.position = p
curr_inst.save()
p = p + 1
if not found:
prev_inst = curr_inst
elif next_inst is None and curr_inst != self:
next_inst = curr_inst
if up:
if prev_inst:
prev_inst.position,self.position = self.position,prev_inst.position
prev_inst.save()
else:
if next_inst:
next_inst.position,self.position = self.position,next_inst.position
next_inst.save()
self.save()
def move_down(self):
"""
Move this object down one position.
"""
return self._move(up=False)
def move_up(self):
"""
Move this object up one position.
"""
return self._move(up=True)
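# Illustrative, framework-free sketch of the reordering algorithm used by
# _move() above, operating on plain objects instead of Django model
# instances. The _Item class and _move_item() helper are assumptions made
# for this example only; they are not part of this module.
class _Item(object):
    def __init__(self, name, position):
        self.name = name
        self.position = position

def _move_item(items, target, up):
    # Renumber everyone consecutively, tracking the neighbours of target.
    items.sort(key=lambda i: i.position)
    prev_item = next_item = None
    found = False
    for p, curr in enumerate(items):
        found = found or (curr is target)
        curr.position = p
        if not found:
            prev_item = curr
        elif next_item is None and curr is not target:
            next_item = curr
    # Swap positions with the appropriate neighbour, if any.
    other = prev_item if up else next_item
    if other is not None:
        other.position, target.position = target.position, other.position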
class Company(Body):
"""
A company owned by the municipality, whose executives are nominated politically.
"""
class Meta(Body.Meta):
verbose_name = _('company')
verbose_name_plural = _('companies')
def get_absolute_url(self):
return reverse("om_company_detail", kwargs={'slug': self.slug})
@property
def charges(self):
"""
The QuerySet of all *current* charges (``CompanyCharge`` instances)
associated with this company.
"""
return self.charge_set.current()
class Office(Body):
"""
Internal municipality office, playing a role in municipality's administration.
"""
parent = models.ForeignKey('Office', blank=True, null=True, default=None, verbose_name=_("the parent office, in a hierarchy"))
class Meta(Body.Meta):
verbose_name = _('office')
verbose_name_plural = _('offices')
def get_absolute_url(self):
return reverse("om_office_detail", kwargs={'slug': self.slug})
@property
def charges(self):
"""
The QuerySet of all *current* charges (``AdministrationCharge`` instances)
associated with this office.
"""
return self.charge_set.current()
#
# Sittings
#
class Sitting(TimeStampedModel):
"""
A sitting models a gathering of people in a given institution.
Votations and speeches usually occur during a sitting.
A sitting is broken down into SittingItems, and each item may be related to one or more acts.
Each item contains Speeches, which are a very special extension of Document
(audio attachments, with complex relations with votations, charges and acts).
"""
idnum = models.CharField(blank=True, max_length=64, verbose_name=_("identifier"))
date = models.DateField(verbose_name=_("date"))
number = models.IntegerField(blank=True, null=True, verbose_name=_("number"))
call = models.IntegerField(blank=True, null=True, verbose_name=_("call"))
institution = models.ForeignKey(Institution, on_delete=models.PROTECT, verbose_name=_("institution"))
other_institution_set = models.ManyToManyField(Institution, blank=True, null=True, verbose_name=_("other institutions"), related_name="other_sittings")
minute = models.ForeignKey('acts.Minute', null=True, blank=True, related_name="sitting_set", verbose_name=_("minute"))
class Meta:
verbose_name = _('sitting')
verbose_name_plural = _('sittings')
def __unicode__(self):
num = ""
if self.number:
num = " num. %s " % self.number
return u'Seduta %s del %s (%s)' % (num, self.date.strftime('%d/%m/%Y'), self.institution.name)
@property
def other_institutions(self):
return self.other_institution_set.all()
@property
def institutions(self):
qs = Institution.objects.none()
if self.institution_id is not None:
qs = Institution.objects.filter(id=self.institution_id)
qs = (qs | self.other_institution_set.all()).distinct()
return qs
@property
def sitting_items(self):
return SittingItem.objects.filter(sitting=self)
@property
def num_items(self):
return self.sitting_items.count()
@permalink
def get_absolute_url(self):
prefix = "%s-%s-%s" % (self.institution.slug, self.idnum, self.date, )
sitting_url = 'om_sitting_detail', (), { 'prefix':prefix, 'pk':self.pk, }
return sitting_url
@property
def sitting_next(self):
next_sittings = Sitting.objects.filter(date__gt=self.date, institution=self.institution).order_by("date")[:1]
if len(next_sittings) == 0:
return None
else:
return next_sittings[0]
@property
def sitting_prev(self):
prev_sittings = Sitting.objects.filter(date__lt=self.date, institution=self.institution).order_by("-date")[:1]
if len(prev_sittings) == 0:
return None
else:
return prev_sittings[0]
class SittingItem(models.Model):
"""
A SittingItem maps a single point of discussion in a Sitting.
It can be of type:
- odg - a true item of discussion
- procedural - a procedural issue, discussed, mostly less relevant
- intt - interrogations and interpellations (questions and answers), usually discussed at the beginning of the sitting
SittingItems are ordered through the seq_order field.
"""
ITEM_TYPE = Choices(
('ODG', 'odg', _('ordine del giorno')),
('PROC', 'procedural', _('questione procedurale')),
('INTT', 'intt', _('interrogation')),
)
sitting = models.ForeignKey(Sitting)
title = models.CharField(max_length=512)
item_type = models.CharField(choices=ITEM_TYPE, max_length=4)
seq_order = models.IntegerField(default=0,verbose_name=_('seq_order'))
related_act_set = models.ManyToManyField('acts.Act', blank=True, null=True)
class Meta:
verbose_name = _('sitting item')
verbose_name_plural = _('sitting items')
def __unicode__(self):
return unicode(self.title)
@permalink
def get_absolute_url(self):
return 'om_sittingitem_detail', (), { 'pk': self.pk }
@property
def num_related_acts(self):
return self.related_act_set.count()
@property
def long_repr(self):
"""
long unicode representation, contains the sitting details
"""
return u'%s - %s' % (self.sitting, self)
@property
def num_speeches(self):
"""
the amount of speeches that refer to this sitting item
"""
return open_municipio.acts.models.Speech.objects.filter(sitting_item=self).count()
## Private DB access API
class Mayor(object):
"""
A municipality mayor (both as a charge and an institution).
"""
@property
def as_institution(self):
"""
A municipality mayor, as an *institution*.
"""
mayor = None
try:
mayor = Institution.objects.select_related().get(institution_type=Institution.MAYOR)
except Institution.DoesNotExist:
# mayor does not exist, currently
pass
return mayor
@property
def as_charge(self):
"""
A municipality mayor, as a *charge*.
"""
mayor = None
try:
mayor = InstitutionCharge.objects.select_related().filter(end_date__isnull=True).get(institution__institution_type=Institution.MAYOR)
except InstitutionCharge.DoesNotExist:
# mayor has not been created
pass
return mayor
@property
def acts(self):
"""
The QuerySet of all acts emitted by the mayor (as an institution).
Note that the objects comprising the resulting QuerySet aren't generic ``Act`` instances,
but instances of specific ``Act`` subclasses (i.e. ``Deliberation``, ``Motion``, etc.).
"""
return self.as_institution.emitted_acts
class CityCouncil(object):
@property
def as_institution(self):
"""
A municipality council, as an *institution*.
"""
city_council = None
try:
city_council = Institution.objects.get(institution_type=Institution.COUNCIL)
except Institution.DoesNotExist:
# the city council has not been created
pass
return city_council
@property
def charges(self):
"""
All current members of the municipality council (aka *counselors*), as charges.
President and vice-presidents **included**.
"""
charges = InstitutionCharge.objects.none()
if self.as_institution:
charges = self.as_institution.charges.select_related()
return charges
@property
def president(self):
"""
The current president of the city council as InstitutionResponsability
None if not found.
"""
president = None
if self.as_institution:
president = self.as_institution.president
return president
@property
def vicepresidents(self):
"""
The current vice presidents of the city council, as InstitutionResponsabilities
There can be more than one vicepresident
"""
vp = None
if self.as_institution:
vp = self.as_institution.vicepresidents.select_related()
return vp
@property
def members(self):
"""
Members of the municipality council (aka *counselors*), as charges.
Current president and vice presidents **excluded**.
"""
members = InstitutionCharge.objects.none()
if self.as_institution:
members = self.as_institution.members.select_related()
return members
@property
def majority_members(self):
"""
Majority counselors, as charges.
"""
# FIXME: this method should return a QuerySet, not a set
result = set()
for majority_group in self.majority_groups:
result.update(majority_group.counselors)
return result
@property
def minority_members(self):
"""
Minority counselors, as charges.
"""
# FIXME: this method should return a QuerySet, not a set
result = set()
for minority_group in self.minority_groups:
result.update(minority_group.counselors)
return result
@property
def groups(self):
"""
Groups of counselors within a municipality council.
"""
return Group.objects.select_related().all()
@property
def majority_groups(self):
"""
Counselors' groups belonging to majority.
"""
qs = Group.objects.select_related().filter(groupismajority__end_date__isnull=True).filter(groupismajority__is_majority=True)
return qs
@property
def minority_groups(self):
"""
Counselors' groups belonging to minority.
"""
qs = Group.objects.select_related().filter(groupismajority__end_date__isnull=True).filter(groupismajority__is_majority=False)
return qs
@property
def acts(self):
"""
The QuerySet of all acts emitted by the City Council.
Note that the objects comprising the resulting QuerySet aren't generic ``Act`` instances,
but instances of specific ``Act`` subclasses (i.e. ``Deliberation``, ``Motion``, etc.).
"""
return self.as_institution.emitted_acts
@property
def deliberations(self):
"""
The QuerySet of all deliberations emitted by the City Council.
"""
from open_municipio.acts.models import Deliberation
return Deliberation.objects.select_related().filter(emitting_institution=self.as_institution)
@property
def interrogations(self):
"""
The QuerySet of all interrogations emitted by the City Council.
"""
from open_municipio.acts.models import Interrogation
return Interrogation.objects.select_related().filter(emitting_institution=self.as_institution)
@property
def interpellations(self):
"""
The QuerySet of all interpellations emitted by the City Council.
"""
from open_municipio.acts.models import Interpellation
return Interpellation.objects.select_related().filter(emitting_institution=self.as_institution)
@property
def motions(self):
"""
The QuerySet of all motions emitted by the City Council.
"""
from open_municipio.acts.models import Motion
return Motion.objects.select_related().filter(emitting_institution=self.as_institution)
@property
def agendas(self):
"""
The QuerySet of all agendas emitted by the City Council.
"""
from open_municipio.acts.models import Agenda
return Agenda.objects.select_related().filter(emitting_institution=self.as_institution)
@property
def amendments(self):
"""
The QuerySet of all amendments emitted by the City Council.
"""
from open_municipio.acts.models import Amendment
return Amendment.objects.select_related().filter(emitting_institution=self.as_institution)
class CityGovernment(object):
@property
def as_institution(self):
"""
A municipality government, as an *institution*.
"""
city_gov = None
try:
city_gov = Institution.objects.get(institution_type=Institution.CITY_GOVERNMENT)
except Institution.DoesNotExist:
# city gov has not been created, yet
pass
return city_gov
@property
def charges(self):
"""
Members of a municipality government (mayor and first deputy included), as charges.
"""
return self.as_institution.charges.select_related()
@property
def firstdeputy(self):
"""
Returns the first deputy mayor if one exists, None otherwise.
"""
firstdeputy = None
if self.as_institution:
firstdeputy = self.as_institution.firstdeputy
return firstdeputy
@property
def members(self):
"""
Members of a municipality government (mayor and first deputy excluded), as charges.
"""
members = InstitutionCharge.objects.none()
if self.as_institution:
members = self.as_institution.members.select_related()
return members
@property
def acts(self):
"""
The QuerySet of all acts emitted by the city government (as an institution).
Note that the objects comprising the resulting QuerySet aren't generic ``Act`` instances,
but instances of specific ``Act`` subclasses (i.e. ``Deliberation``, ``Motion``, etc.).
"""
return self.as_institution.emitted_acts
@property
def deliberations(self):
"""
The QuerySet of all deliberations emitted by the City Government.
"""
from open_municipio.acts.models import Deliberation
return Deliberation.objects.select_related().filter(emitting_institution=self.as_institution)
@property
def interrogations(self):
"""
The QuerySet of all interrogations emitted by the City Government.
"""
from open_municipio.acts.models import Interrogation
return Interrogation.objects.select_related().filter(emitting_institution=self.as_institution)
@property
def interpellations(self):
"""
The QuerySet of all interpellations emitted by the City Government.
"""
from open_municipio.acts.models import Interpellation
return Interpellation.objects.select_related().filter(emitting_institution=self.as_institution)
@property
def motions(self):
"""
The QuerySet of all motions emitted by the City Government.
"""
from open_municipio.acts.models import Motion
return Motion.objects.select_related().filter(emitting_institution=self.as_institution)
@property
def agendas(self):
"""
The QuerySet of all agendas emitted by the City Government.
"""
from open_municipio.acts.models import Agenda
return Agenda.objects.select_related().filter(emitting_institution=self.as_institution)
@property
def amendments(self):
"""
The QuerySet of all amendments emitted by the City Government.
"""
from open_municipio.acts.models import Amendment
return Amendment.objects.select_related().filter(emitting_institution=self.as_institution)
class Committees(object):
def as_institution(self):
"""
Municipality committees, as *institutions*.
"""
# NOTE: joint committees (Institution.JOINT_COMMITTEE) are
# included alongside ordinary committees
return Institution.objects.select_related().filter(
institution_type__in=(Institution.COMMITTEE, Institution.JOINT_COMMITTEE)
)
class Municipality(object):
"""
A hierarchy of objects representing a municipality.
Provides convenient access to institutions, charges, groups and the like.
"""
def __init__(self):
self.mayor = Mayor()
self.gov = CityGovernment()
self.council = CityCouncil()
self.committees = Committees()
municipality = Municipality()
|
National Parents Organization of Maryland recently submitted written testimony regarding two bills before the Maryland Legislature: SB1004, "Family Law — Children's Civil Rights — Equal Parenting Time", sponsored by State Senator C. Anthony Muse (Democrat, District 26, Prince George's County) and HB1440, "Family Law — Children's Civil Rights — Equal Parenting Time", sponsored by State Delegate Jill P. Carter (Democrat, District 41, Baltimore City). Both Senator Muse and Delegate Carter are proponents of joint legal and shared physical custody with roughly equal parenting time, and their respective bills (almost identical) seek to institute a rebuttable presumption of both based on the "best interest" of the child. The bills' main provisions are as follows:
The Court shall award custody based on what is in the best interest of the child.
There shall be a rebuttable presumption that joint legal custody is in the best interest of the child.
There shall be a rebuttable presumption that shared physical custody, with each parent sharing roughly equal time with the child, is in the best interest of the child.
In cases where it is determined that joint legal custody is not in the best interest of the child, the Court must specify the reason(s).
In cases where it is determined that shared physical custody is not in the best interest of the child, the Court must specify the reason(s).
In cases where significantly unequal physical custody is awarded, the Court must specify the reason(s). "Significantly unequal physical custody" shall be defined as a physical custody situation where there is a time differential between the parties equal to or greater than 10 percent of the child's time.
In cases where sole physical custody is awarded to one of the parents, a preference should be to make the award to the parent who is most cooperative and will most likely support the non-custodial parent's continuing relationship with the child.
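To make the 10-percent threshold concrete, here is a minimal sketch (our own illustration, not statutory language) of the "significantly unequal" test described above:

```python
def is_significantly_unequal(parent_a_share, parent_b_share):
    """Return True when the parenting-time differential is 10% or more.

    Shares are fractions of the child's time, e.g. 0.55 and 0.45.
    """
    return abs(parent_a_share - parent_b_share) >= 0.10
```

Under this reading, a 55/45 split already counts as significantly unequal, so the Court would have to specify its reasons.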
You may read our full written testimony for SB 1004 and HB 1440.
Various statutes in various states address some of these items, but to our knowledge, no state currently addresses all of them. We're hoping that Maryland, with the leadership of Senator Muse and Delegate Carter, might be the first!
|
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from oslo_log import log as logging
from oslo_serialization import jsonutils
from oslo_service import service
from oslo_utils import timeutils
import requests
import six
from six.moves.urllib import parse as urlparse
from heat.common.i18n import _
from heat.common.i18n import _LI
from heat.engine import api
from heat.objects import resource as resource_object
from heat.objects import software_config as software_config_object
from heat.objects import software_deployment as software_deployment_object
from heat.rpc import api as rpc_api
LOG = logging.getLogger(__name__)
class SoftwareConfigService(service.Service):
def show_software_config(self, cnxt, config_id):
sc = software_config_object.SoftwareConfig.get_by_id(cnxt, config_id)
return api.format_software_config(sc)
def list_software_configs(self, cnxt, limit=None, marker=None,
tenant_safe=True):
scs = software_config_object.SoftwareConfig.get_all(
cnxt,
limit=limit,
marker=marker,
tenant_safe=tenant_safe)
result = [api.format_software_config(sc, detail=False) for sc in scs]
return result
def create_software_config(self, cnxt, group, name, config,
inputs, outputs, options):
sc = software_config_object.SoftwareConfig.create(cnxt, {
'group': group,
'name': name,
'config': {
'inputs': inputs,
'outputs': outputs,
'options': options,
'config': config
},
'tenant': cnxt.tenant_id})
return api.format_software_config(sc)
def delete_software_config(self, cnxt, config_id):
software_config_object.SoftwareConfig.delete(cnxt, config_id)
def list_software_deployments(self, cnxt, server_id):
all_sd = software_deployment_object.SoftwareDeployment.get_all(
cnxt, server_id)
result = [api.format_software_deployment(sd) for sd in all_sd]
return result
def metadata_software_deployments(self, cnxt, server_id):
if not server_id:
raise ValueError(_('server_id must be specified'))
all_sd = software_deployment_object.SoftwareDeployment.get_all(
cnxt, server_id)
# sort the configs by config name, to give the list of metadata a
# deterministic and controllable order.
all_sd_s = sorted(all_sd, key=lambda sd: sd.config.name)
result = [api.format_software_config(sd.config) for sd in all_sd_s]
return result
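# Illustrative standalone sketch of the ordering rule above: deployments
# are sorted by their config name so the resulting metadata list is
# deterministic. _ordered_config_names() is an assumption for the example
# and is not part of this service.
def _ordered_config_names(deployments):
    # `deployments` is any iterable of objects exposing .config.name
    return [sd.config.name
            for sd in sorted(deployments, key=lambda sd: sd.config.name)]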
def _push_metadata_software_deployments(self, cnxt, server_id, sd):
rs = (resource_object.Resource.
get_by_physical_resource_id(cnxt, server_id))
if not rs:
return
deployments = self.metadata_software_deployments(cnxt, server_id)
md = rs.rsrc_metadata or {}
md['deployments'] = deployments
rs.update_and_save({'rsrc_metadata': md})
metadata_put_url = None
metadata_queue_id = None
for rd in rs.data:
if rd.key == 'metadata_put_url':
metadata_put_url = rd.value
break
elif rd.key == 'metadata_queue_id':
metadata_queue_id = rd.value
break
if metadata_put_url:
json_md = jsonutils.dumps(md)
requests.put(metadata_put_url, json_md)
elif metadata_queue_id:
zaqar_plugin = cnxt.clients.client_plugin('zaqar')
zaqar = zaqar_plugin.create_for_tenant(sd.stack_user_project_id)
queue = zaqar.queue(metadata_queue_id)
queue.post({'body': md, 'ttl': zaqar_plugin.DEFAULT_TTL})
def _refresh_swift_software_deployment(self, cnxt, sd, deploy_signal_id):
container, object_name = urlparse.urlparse(
deploy_signal_id).path.split('/')[-2:]
swift_plugin = cnxt.clients.client_plugin('swift')
swift = swift_plugin.client()
try:
headers = swift.head_object(container, object_name)
except Exception as ex:
# ignore not-found, in case swift is not consistent yet
if swift_plugin.is_not_found(ex):
LOG.info(_LI('Signal object not found: %(c)s %(o)s'), {
'c': container, 'o': object_name})
return sd
raise ex
lm = headers.get('last-modified')
last_modified = swift_plugin.parse_last_modified(lm)
prev_last_modified = sd.updated_at
if prev_last_modified:
# assume stored as utc, convert to offset-naive datetime
prev_last_modified = prev_last_modified.replace(tzinfo=None)
if prev_last_modified and (last_modified <= prev_last_modified):
return sd
try:
(headers, obj) = swift.get_object(container, object_name)
except Exception as ex:
# ignore not-found, in case swift is not consistent yet
if swift_plugin.is_not_found(ex):
LOG.info(_LI(
'Signal object not found: %(c)s %(o)s'), {
'c': container, 'o': object_name})
return sd
raise ex
if obj:
self.signal_software_deployment(
cnxt, sd.id, jsonutils.loads(obj),
last_modified.isoformat())
return software_deployment_object.SoftwareDeployment.get_by_id(
cnxt, sd.id)
def _refresh_zaqar_software_deployment(self, cnxt, sd, deploy_queue_id):
zaqar_plugin = cnxt.clients.client_plugin('zaqar')
zaqar = zaqar_plugin.create_for_tenant(sd.stack_user_project_id)
queue = zaqar.queue(deploy_queue_id)
messages = list(queue.pop())
if messages:
self.signal_software_deployment(
cnxt, sd.id, messages[0].body, None)
return software_deployment_object.SoftwareDeployment.get_by_id(
cnxt, sd.id)
def show_software_deployment(self, cnxt, deployment_id):
sd = software_deployment_object.SoftwareDeployment.get_by_id(
cnxt, deployment_id)
if sd.status == rpc_api.SOFTWARE_DEPLOYMENT_IN_PROGRESS:
c = sd.config.config
input_values = dict((i['name'], i['value']) for i in c['inputs'])
transport = input_values.get('deploy_signal_transport')
if transport == 'TEMP_URL_SIGNAL':
sd = self._refresh_swift_software_deployment(
cnxt, sd, input_values.get('deploy_signal_id'))
elif transport == 'ZAQAR_SIGNAL':
sd = self._refresh_zaqar_software_deployment(
cnxt, sd, input_values.get('deploy_queue_id'))
return api.format_software_deployment(sd)
def create_software_deployment(self, cnxt, server_id, config_id,
input_values, action, status,
status_reason, stack_user_project_id):
sd = software_deployment_object.SoftwareDeployment.create(cnxt, {
'config_id': config_id,
'server_id': server_id,
'input_values': input_values,
'tenant': cnxt.tenant_id,
'stack_user_project_id': stack_user_project_id,
'action': action,
'status': status,
'status_reason': status_reason})
self._push_metadata_software_deployments(cnxt, server_id, sd)
return api.format_software_deployment(sd)
def signal_software_deployment(self, cnxt, deployment_id, details,
updated_at):
if not deployment_id:
raise ValueError(_('deployment_id must be specified'))
sd = software_deployment_object.SoftwareDeployment.get_by_id(
cnxt, deployment_id)
status = sd.status
if status != rpc_api.SOFTWARE_DEPLOYMENT_IN_PROGRESS:
# output values are only expected when in an IN_PROGRESS state
return
details = details or {}
output_status_code = rpc_api.SOFTWARE_DEPLOYMENT_OUTPUT_STATUS_CODE
ov = sd.output_values or {}
status = None
status_reasons = {}
status_code = details.get(output_status_code)
if status_code and str(status_code) != '0':
status = rpc_api.SOFTWARE_DEPLOYMENT_FAILED
status_reasons[output_status_code] = _(
'Deployment exited with non-zero status code: %s'
) % details.get(output_status_code)
event_reason = 'deployment failed (%s)' % status_code
else:
event_reason = 'deployment succeeded'
for output in sd.config.config['outputs'] or []:
out_key = output['name']
if out_key in details:
ov[out_key] = details[out_key]
if output.get('error_output', False):
status = rpc_api.SOFTWARE_DEPLOYMENT_FAILED
status_reasons[out_key] = details[out_key]
event_reason = 'deployment failed'
for out_key in rpc_api.SOFTWARE_DEPLOYMENT_OUTPUTS:
ov[out_key] = details.get(out_key)
if status == rpc_api.SOFTWARE_DEPLOYMENT_FAILED:
# build a status reason out of all of the values of outputs
# flagged as error_output
status_reasons = [' : '.join((k, six.text_type(status_reasons[k])))
for k in status_reasons]
status_reason = ', '.join(status_reasons)
else:
status = rpc_api.SOFTWARE_DEPLOYMENT_COMPLETE
status_reason = _('Outputs received')
self.update_software_deployment(
cnxt, deployment_id=deployment_id,
output_values=ov, status=status, status_reason=status_reason,
config_id=None, input_values=None, action=None,
updated_at=updated_at)
# Return a string describing the outcome of handling the signal data
return event_reason
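# Standalone sketch of the outcome decision implemented above: a non-zero
# status code, or any output flagged as an error output, marks the
# deployment FAILED; otherwise it is COMPLETE. The helper name and the
# 'deploy_status_code' key are illustrative assumptions.
def _deployment_status(details, error_output_keys=()):
    code = details.get('deploy_status_code')
    if code is not None and str(code) != '0':
        return 'FAILED'
    for key in error_output_keys:
        if key in details:
            return 'FAILED'
    return 'COMPLETE'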
def update_software_deployment(self, cnxt, deployment_id, config_id,
input_values, output_values, action,
status, status_reason, updated_at):
update_data = {}
if config_id:
update_data['config_id'] = config_id
if input_values:
update_data['input_values'] = input_values
if output_values:
update_data['output_values'] = output_values
if action:
update_data['action'] = action
if status:
update_data['status'] = status
if status_reason:
update_data['status_reason'] = status_reason
if updated_at:
update_data['updated_at'] = timeutils.normalize_time(
timeutils.parse_isotime(updated_at))
else:
update_data['updated_at'] = timeutils.utcnow()
sd = software_deployment_object.SoftwareDeployment.update_by_id(
cnxt, deployment_id, update_data)
# only push metadata if this update resulted in the config_id
# changing, since metadata is just a list of configs
if config_id:
self._push_metadata_software_deployments(cnxt, sd.server_id, sd)
return api.format_software_deployment(sd)
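# Illustrative stdlib-only sketch of the updated_at handling above: an
# ISO 8601 timestamp string is parsed to a naive datetime, and the current
# UTC time is used when none is given. The real code relies on oslo.utils
# timeutils; the format accepted here is a simplified subset.
from datetime import datetime

def _normalize_updated_at(updated_at=None):
    if updated_at:
        return datetime.strptime(updated_at, '%Y-%m-%dT%H:%M:%S')
    return datetime.utcnow()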
def delete_software_deployment(self, cnxt, deployment_id):
software_deployment_object.SoftwareDeployment.delete(
cnxt, deployment_id)
|
thetick.ws | Leggman's The Tick Webpage!
Title: Leggman's The Tick Webpage!.
Description: This page is dedicated to the Mighty Blue Warrior, The Tick! Enter here for the most in-depth information available anywhere about Tick comics, Tick cartoons, and the Tick TV show!.
|
#!/usr/bin/env python
'''
Parse a C source file.
To use, subclass CParser and override its handle_* methods. Then instantiate
the class with a string to parse.
'''
__docformat__ = 'restructuredtext'
import operator
import os.path
import re
import sys
import time
import warnings
import preprocessor
import yacc
import cgrammar
import cdeclarations
# --------------------------------------------------------------------------
# Lexer
# --------------------------------------------------------------------------
class CLexer(object):
def __init__(self, cparser):
self.cparser = cparser
self.type_names = set()
self.in_define = False
def input(self, tokens):
self.tokens = tokens
self.pos = 0
def token(self):
while self.pos < len(self.tokens):
t = self.tokens[self.pos]
self.pos += 1
if not t:
break
if t.type == 'PP_DEFINE':
self.in_define = True
elif t.type == 'PP_END_DEFINE':
self.in_define = False
# Transform PP tokens into C tokens
elif t.type == 'LPAREN':
t.type = '('
elif t.type == 'PP_NUMBER':
t.type = 'CONSTANT'
elif t.type == 'IDENTIFIER' and t.value in cgrammar.keywords:
t.type = t.value.upper()
elif t.type == 'IDENTIFIER' and t.value in self.type_names:
if (self.pos < 2 or self.tokens[self.pos-2].type not in
('ENUM', 'STRUCT', 'UNION')):
t.type = 'TYPE_NAME'
t.lexer = self
t.clexpos = self.pos - 1
return t
return None
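# Illustrative sketch of the identifier-promotion rule in token() above:
# an IDENTIFIER becomes a keyword token when its value is a C keyword,
# or TYPE_NAME when it names a known typedef, unless it directly follows
# ENUM/STRUCT/UNION. _promote() is an assumption made for the example.
def _promote(tok_type, value, keywords, type_names, prev_type=None):
    if tok_type == 'IDENTIFIER' and value in keywords:
        return value.upper()
    if (tok_type == 'IDENTIFIER' and value in type_names
            and prev_type not in ('ENUM', 'STRUCT', 'UNION')):
        return 'TYPE_NAME'
    return tok_type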
# --------------------------------------------------------------------------
# Parser
# --------------------------------------------------------------------------
class CParser(object):
'''Parse a C source file.
Subclass and override the handle_* methods. Call `parse` with a string
to parse.
'''
def __init__(self, options):
self.preprocessor_parser = preprocessor.PreprocessorParser(options,self)
self.parser = yacc.Parser()
prototype = yacc.yacc(method = 'LALR',
debug = False,
module = cgrammar,
write_tables = True,
outputdir = os.path.dirname(__file__),
optimize = True)
# If yacc is reading tables from a file, then it won't find the error
# function... need to set it manually
prototype.errorfunc = cgrammar.p_error
prototype.init_parser(self.parser)
self.parser.cparser = self
self.lexer = CLexer(self)
if not options.no_stddef_types:
self.lexer.type_names.add('wchar_t')
self.lexer.type_names.add('ptrdiff_t')
self.lexer.type_names.add('size_t')
if not options.no_gnu_types:
self.lexer.type_names.add('__builtin_va_list')
if sys.platform == 'win32' and not options.no_python_types:
self.lexer.type_names.add('__int64')
def parse(self, filename, debug=False):
'''Parse a file.
If `debug` is True, parsing state is dumped to stdout.
'''
self.handle_status('Preprocessing %s' % filename)
self.preprocessor_parser.parse(filename)
self.lexer.input(self.preprocessor_parser.output)
self.handle_status('Parsing %s' % filename)
self.parser.parse(lexer=self.lexer, debug=debug)
# ----------------------------------------------------------------------
# Parser interface. Override these methods in your subclass.
# ----------------------------------------------------------------------
def handle_error(self, message, filename, lineno):
'''A parse error occurred.
The default implementation prints `lineno` and `message` to stderr.
The parser will try to recover from errors by synchronising at the
next semicolon.
'''
print >> sys.stderr, '%s:%s %s' % (filename, lineno, message)
def handle_pp_error(self, message):
'''The C preprocessor emitted an error.
The default implementation prints the error to stderr. If processing
can continue, it will.
'''
print >> sys.stderr, 'Preprocessor:', message
def handle_status(self, message):
'''Progress information.
The default implementation prints `message` to stderr.
'''
print >> sys.stderr, message
def handle_define(self, name, params, value, filename, lineno):
'''#define `name` `value`
or #define `name`(`params`) `value`
name is a string
params is None or a list of strings
value is a ...?
'''
def handle_define_constant(self, name, value, filename, lineno):
'''#define `name` `value`
name is a string
value is an ExpressionNode or None
'''
def handle_define_macro(self, name, params, value, filename, lineno):
'''#define `name`(`params`) `value`
name is a string
params is a list of strings
value is an ExpressionNode or None
'''
def impl_handle_declaration(self, declaration, filename, lineno):
'''Internal method that calls `handle_declaration`. This method
also adds any new type definitions to the lexer's list of valid type
names, which affects the parsing of subsequent declarations.
'''
if declaration.storage == 'typedef':
declarator = declaration.declarator
if not declarator:
# XXX TEMPORARY while struct etc not filled
return
while declarator.pointer:
declarator = declarator.pointer
self.lexer.type_names.add(declarator.identifier)
self.handle_declaration(declaration, filename, lineno)
def handle_declaration(self, declaration, filename, lineno):
'''A declaration was encountered.
`declaration` is an instance of Declaration. Where a declaration has
multiple initialisers, each is returned as a separate declaration.
'''
pass
class DebugCParser(CParser):
'''A convenience class that prints each invocation of a handle_* method to
stdout.
'''
def handle_define(self, name, params, value, filename, lineno):
print '#define name=%r, params=%r, value=%r' % (name, params, value)
def handle_define_constant(self, name, value, filename, lineno):
print '#define constant name=%r, value=%r' % (name, value)
def handle_declaration(self, declaration, filename, lineno):
print declaration
if __name__ == '__main__':
DebugCParser().parse(sys.argv[1], debug=True)
|
Organized at the University of Milan in collaboration with the DHLSNA and other Lawrence societies (as coordinated by CCILC), and with the participation of many other universities and scholars from around the world, the 13th International D. H. Lawrence Conference will take place at the University of Milan's conference centre, the Palazzo Feltrinelli, overlooking Lake Garda in the centre of Gargnano and within walking distance of many Lawrence-related places.
"Gargnano: Life in the midst of Beauty"
Includes a wealth of information on Gargnano, including a walking tour "In the Footsteps of D.H. Lawrence" and highlighting historical places of interest.
Lawrence lived at Gargnano with Frieda between September 1912 and April 1913, and it was here that he finally completed the novel which was to establish his reputation as one of the most promising novelists of his generation – Sons and Lovers. Mark Kinkead-Weekes, in his Cambridge biography of Lawrence, Triumph to Exile, described the Gargnano period as one of “new life” and “new utterance,” and here, too, amongst other things, Lawrence wrote many of the poems of Look! We Have Come Through! and the plays The Fight for Barbara and The Daughter-in-Law. He made a start on The Sisters, which would later develop into The Rainbow and Women in Love, and aspects of his journey to Italy fed directly into his later novel, Mr Noon. Most recognisably, perhaps, the magical ambience of Gargnano inspired Lawrence's wonderfully ambitious early travel writing in the Italian essays associated with Twilight in Italy.
Further details about the conference will be added to this site over the coming months, so please visit these pages regularly and please share the conference link with all interested colleagues and friends.
Villa Igea, south of Gargnano in the village of Villa, Lago di Garda. Sketch by Molly Wallace, 1990.
"The Villa Igea is just across the road from the lake, and looks on the water. There, in the sunshine - it is always sunny here - I shall finish Paul Morel and do another novel - God helping me." From Lawrence's letter to Arthur McLeod, 17 September 1912 (Letters, vol. i, p. 456).
Like a moth on a stone? . . .
Here by the darkened lake?
To hold to your lips.
|
#!/usr/bin/env python
# Copyright 2016 the V8 project authors. All rights reserved.
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Script to transform and merge sancov files into human readable json-format.
The script supports three actions:
all: Writes a json file with all instrumented lines of all executables.
merge: Merges sancov files with coverage output into an existing json file.
split: Split json file into separate files per covered source file.
The json data is structured as follows:
{
"version": 1,
"tests": ["executable1", "executable2", ...],
"files": {
"file1": [[<instr line 1>, <bit_mask>], [<instr line 2>, <bit_mask>], ...],
"file2": [...],
...
}
}
The executables are sorted and determine the test bit mask. Their index+1 is
the bit, e.g. executable1 = 1, executable3 = 4, etc. Hence, a line covered by
executable1 and executable3 will have bit_mask == 5 == 0b101. The number of
tests is restricted to 52 in version 1, to allow javascript JSON parsing of
the bitsets encoded as numbers. JS max safe int is (1 << 53) - 1.
The line-number-bit_mask pairs are sorted by line number and don't contain
duplicates.
Split json data preserves the same format, but only contains one file per
json file.
The sancov tool is expected to be in the llvm compiler-rt third-party
directory. It's not checked out by default and must be added as a custom deps:
'v8/third_party/llvm/projects/compiler-rt':
'https://chromium.googlesource.com/external/llvm.org/compiler-rt.git'
"""
import argparse
import json
import logging
import os
import re
import subprocess
import sys
from multiprocessing import Pool, cpu_count
logging.basicConfig(level=logging.INFO)
# Files to exclude from coverage. Dropping their data early adds more speed.
# The contained cc files are already excluded from instrumentation, but inlined
# data is referenced through v8's object files.
EXCLUSIONS = [
'buildtools',
'src/third_party',
'third_party',
'test',
'testing',
]
# Executables found in the build output for which no coverage is generated.
# Exclude them from the coverage data file.
EXE_BLACKLIST = [
'generate-bytecode-expectations',
'hello-world',
'mksnapshot',
'parser-shell',
'process',
'shell',
]
# V8 checkout directory.
BASE_DIR = os.path.dirname(os.path.dirname(os.path.dirname(
os.path.abspath(__file__))))
# The sancov tool location.
SANCOV_TOOL = os.path.join(
BASE_DIR, 'third_party', 'llvm', 'projects', 'compiler-rt',
'lib', 'sanitizer_common', 'scripts', 'sancov.py')
# Simple script to sanitize the PCs from objdump.
SANITIZE_PCS = os.path.join(BASE_DIR, 'tools', 'sanitizers', 'sanitize_pcs.py')
# The llvm symbolizer location.
SYMBOLIZER = os.path.join(
BASE_DIR, 'third_party', 'llvm-build', 'Release+Asserts', 'bin',
'llvm-symbolizer')
# Number of cpus.
CPUS = cpu_count()
# Regexp to find sancov files as output by sancov_merger.py. Also grabs the
# executable name in group 1.
SANCOV_FILE_RE = re.compile(r'^(.*)\.result.sancov$')
def executables(build_dir):
"""Iterates over executable files in the build directory."""
for f in os.listdir(build_dir):
file_path = os.path.join(build_dir, f)
if (os.path.isfile(file_path) and
os.access(file_path, os.X_OK) and
f not in EXE_BLACKLIST):
yield file_path
def process_symbolizer_output(output, build_dir):
"""Post-process llvm symbolizer output.
Excludes files outside the v8 checkout or given in exclusion list above
from further processing. Drops the character index in each line.
Returns: A mapping of file names to lists of line numbers. The file names
have relative paths to the v8 base directory. The lists of line
numbers don't contain duplicate lines and are sorted.
"""
# Path prefix added by the llvm symbolizer including trailing slash.
output_path_prefix = os.path.join(build_dir, '..', '..', '')
# Drop path prefix when iterating lines. The path is redundant and takes
# too much space. Drop files outside that path, e.g. generated files in
# the build dir and absolute paths to c++ library headers.
def iter_lines():
for line in output.strip().splitlines():
if line.startswith(output_path_prefix):
yield line[len(output_path_prefix):]
# Map file names to sets of instrumented line numbers.
file_map = {}
for line in iter_lines():
# Drop character number, we only care for line numbers. Each line has the
# form: <file name>:<line number>:<character number>.
file_name, number, _ = line.split(':')
file_map.setdefault(file_name, set([])).add(int(number))
# Remove exclusion patterns from file map. It's cheaper to do it after the
# mapping, as there are few excluded files and we don't want to do this
# check for numerous lines in ordinary files.
def keep(file_name):
for e in EXCLUSIONS:
if file_name.startswith(e):
return False
return True
# Return in serializable form and filter.
return {k: sorted(file_map[k]) for k in file_map if keep(k)}
def get_instrumented_lines(executable):
"""Return the instrumented lines of an executable.
Called through a multiprocessing pool.
Returns: Post-processed llvm output as returned by process_symbolizer_output.
"""
# The first two pipes are from llvm's tool sancov.py with 0x added to the hex
# numbers. The results are piped into the llvm symbolizer, which outputs for
# each PC: <file name with abs path>:<line number>:<character number>.
# We don't call the sancov tool to get more speed.
process = subprocess.Popen(
'objdump -d %s | '
'grep \'^\s\+[0-9a-f]\+:.*\scall\(q\|\)\s\+[0-9a-f]\+ '
'<__sanitizer_cov\(_with_check\|\|_trace_pc_guard\)\(@plt\|\)>\' | '
'grep \'^\s\+[0-9a-f]\+\' -o | '
'%s | '
'%s --obj %s -functions=none' %
(executable, SANITIZE_PCS, SYMBOLIZER, executable),
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE,
cwd=BASE_DIR,
shell=True,
)
output, _ = process.communicate()
assert process.returncode == 0
return process_symbolizer_output(output, os.path.dirname(executable))
def merge_instrumented_line_results(exe_list, results):
"""Merge multiprocessing results for all instrumented lines.
Args:
exe_list: List of all executable names with absolute paths.
results: List of results as returned by get_instrumented_lines.
Returns: Dict to be used as json data as specified on the top of this page.
The dictionary contains all instrumented lines of all files
referenced by all executables.
"""
def merge_files(x, y):
for file_name, lines in y.iteritems():
x.setdefault(file_name, set([])).update(lines)
return x
result = reduce(merge_files, results, {})
# Return data as file->lines mapping. The lines are saved as lists
# with (line number, test bits (as int)). The test bits are initialized with
# 0, meaning instrumented, but no coverage.
# The order of the test bits is given with key 'tests'. For now, these are
# the executable names. We use a _list_ with two items instead of a tuple to
# ease merging by allowing mutation of the second item.
return {
'version': 1,
'tests': sorted(map(os.path.basename, exe_list)),
'files': {f: map(lambda l: [l, 0], sorted(result[f])) for f in result},
}
def write_instrumented(options):
"""Implements the 'all' action of this tool."""
exe_list = list(executables(options.build_dir))
logging.info('Reading instrumented lines from %d executables.',
len(exe_list))
pool = Pool(CPUS)
try:
results = pool.imap_unordered(get_instrumented_lines, exe_list)
finally:
pool.close()
# Merge multiprocessing results and prepare output data.
data = merge_instrumented_line_results(exe_list, results)
logging.info('Read data from %d executables, which covers %d files.',
len(data['tests']), len(data['files']))
logging.info('Writing results to %s', options.json_output)
# Write json output.
with open(options.json_output, 'w') as f:
json.dump(data, f, sort_keys=True)
def get_covered_lines(args):
"""Return the covered lines of an executable.
Called through a multiprocessing pool. The args are expected to unpack to:
cov_dir: Folder with sancov files merged by sancov_merger.py.
executable: Absolute path to the executable that was called to produce the
given coverage data.
sancov_file: The merged sancov file with coverage data.
Returns: A tuple of post-processed llvm output as returned by
process_symbolizer_output and the executable name.
"""
cov_dir, executable, sancov_file = args
# Let the sancov tool print the covered PCs and pipe them through the llvm
# symbolizer.
process = subprocess.Popen(
'%s print %s 2> /dev/null | '
'%s --obj %s -functions=none' %
(SANCOV_TOOL,
os.path.join(cov_dir, sancov_file),
SYMBOLIZER,
executable),
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
stdin=subprocess.PIPE,
cwd=BASE_DIR,
shell=True,
)
output, _ = process.communicate()
assert process.returncode == 0
return (
process_symbolizer_output(output, os.path.dirname(executable)),
os.path.basename(executable),
)
def merge_covered_line_results(data, results):
"""Merge multiprocessing results for covered lines.
The data is mutated, the results are merged into it in place.
Args:
data: Existing coverage data from json file containing all instrumented
lines.
results: List of results as returned by get_covered_lines.
"""
# List of executables and mapping to the test bit mask. The number of
# tests is restricted to 52, to allow javascript JSON parsing of
# the bitsets encoded as numbers. JS max safe int is (1 << 53) - 1.
exe_list = data['tests']
assert len(exe_list) <= 52, 'Max 52 different tests are supported.'
test_bit_masks = {exe:1<<i for i, exe in enumerate(exe_list)}
def merge_lines(old_lines, new_lines, mask):
"""Merge the coverage data of a list of lines.
Args:
old_lines: Lines as list of pairs with line number and test bit mask.
The new lines will be merged into the list in place.
new_lines: List of new (covered) lines (sorted).
mask: The bit to be set for covered lines. The bit index is the test
index of the executable that covered the line.
"""
i = 0
# Iterate over old and new lines, both are sorted.
for l in new_lines:
while old_lines[i][0] < l:
# Forward instrumented lines not present in this coverage data.
i += 1
# TODO: Add more context to the assert message.
assert i < len(old_lines), 'Covered line %d not in input file.' % l
assert old_lines[i][0] == l, 'Covered line %d not in input file.' % l
# Add coverage information to the line.
old_lines[i][1] |= mask
def merge_files(data, result):
"""Merge result into data.
The data is mutated in place.
Args:
data: Merged coverage data from the previous reduce step.
result: New result to be merged in. The type is as returned by
get_covered_lines.
"""
file_map, executable = result
files = data['files']
for file_name, lines in file_map.iteritems():
merge_lines(files[file_name], lines, test_bit_masks[executable])
return data
reduce(merge_files, results, data)
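As a standalone illustration of the merge step performed by merge_lines above (line numbers and the test bit are hypothetical):

```python
# Instrumented lines start with mask 0; a covering test ORs its bit in.
old_lines = [[10, 0], [12, 0], [20, 0]]  # [line number, test bit mask], sorted
new_lines = [10, 20]                     # lines covered by one executable
mask = 1 << 2                            # bit assigned to that executable

i = 0
for l in new_lines:
    while old_lines[i][0] < l:
        i += 1  # skip instrumented lines not covered by this executable
    assert old_lines[i][0] == l, 'Covered line %d not in input file.' % l
    old_lines[i][1] |= mask

# Lines 10 and 20 now carry this executable's bit; line 12 stays uncovered.
assert old_lines == [[10, 4], [12, 0], [20, 4]]
```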
def merge(options):
"""Implements the 'merge' action of this tool."""
# Check if folder with coverage output exists.
assert (os.path.exists(options.coverage_dir) and
os.path.isdir(options.coverage_dir))
# Inputs for multiprocessing. List of tuples of:
# Coverage dir, absolute path to executable, sancov file name.
inputs = []
for sancov_file in os.listdir(options.coverage_dir):
match = SANCOV_FILE_RE.match(sancov_file)
if match:
inputs.append((
options.coverage_dir,
os.path.join(options.build_dir, match.group(1)),
sancov_file,
))
logging.info('Merging %d sancov files into %s',
len(inputs), options.json_input)
# Post-process covered lines in parallel.
pool = Pool(CPUS)
try:
results = pool.imap_unordered(get_covered_lines, inputs)
finally:
pool.close()
# Load existing json data file for merging the results.
with open(options.json_input, 'r') as f:
data = json.load(f)
# Merge multiprocessing results. Mutates data.
merge_covered_line_results(data, results)
logging.info('Merged data from %d executables, which covers %d files.',
len(data['tests']), len(data['files']))
logging.info('Writing results to %s', options.json_output)
# Write merged results to file.
with open(options.json_output, 'w') as f:
json.dump(data, f, sort_keys=True)
def split(options):
"""Implements the 'split' action of this tool."""
# Load existing json data file for splitting.
with open(options.json_input, 'r') as f:
data = json.load(f)
logging.info('Splitting off %d coverage files from %s',
len(data['files']), options.json_input)
for file_name, coverage in data['files'].iteritems():
# Preserve relative directories that are part of the file name.
file_path = os.path.join(options.output_dir, file_name + '.json')
try:
os.makedirs(os.path.dirname(file_path))
except OSError:
# Ignore existing directories.
pass
with open(file_path, 'w') as f:
# Flat-copy the old dict.
new_data = dict(data)
# Update current file.
new_data['files'] = {file_name: coverage}
# Write json data.
json.dump(new_data, f, sort_keys=True)
def main(args=None):
parser = argparse.ArgumentParser()
# TODO(machenbach): Make this required and deprecate the default.
parser.add_argument('--build-dir',
default=os.path.join(BASE_DIR, 'out', 'Release'),
help='Path to the build output directory.')
parser.add_argument('--coverage-dir',
help='Path to the sancov output files.')
parser.add_argument('--json-input',
help='Path to an existing json file with coverage data.')
parser.add_argument('--json-output',
help='Path to a file to write json output to.')
parser.add_argument('--output-dir',
help='Directory where to put split output files to.')
parser.add_argument('action', choices=['all', 'merge', 'split'],
help='Action to perform.')
options = parser.parse_args(args)
options.build_dir = os.path.abspath(options.build_dir)
if options.action.lower() == 'all':
if not options.json_output:
print '--json-output is required'
return 1
write_instrumented(options)
elif options.action.lower() == 'merge':
if not options.coverage_dir:
print '--coverage-dir is required'
return 1
if not options.json_input:
print '--json-input is required'
return 1
if not options.json_output:
print '--json-output is required'
return 1
merge(options)
elif options.action.lower() == 'split':
if not options.json_input:
print '--json-input is required'
return 1
if not options.output_dir:
print '--output-dir is required'
return 1
split(options)
return 0
if __name__ == '__main__':
sys.exit(main())
|
that formal study of the craft of editing television news appears to have suffered from a lack of a conventional vocabulary for describing and analyzing structural techniques used in what is primarily an audio-visual phenomenon, maintaining that television journalists have traditionally learned the evolving art of news shooting and editing through an immersion process that does not readily lend itself to conscious articulation of forms. Hence, it should not be too surprising that discussions of the evolution of trends in journalistic editing are often based on scant anecdotal evidence.
The ‘dearth of formal analysis’ is something that can be easily remedied: the materials are easy to access and the methods are well known. A question worth exploring is why no one has done this research.
A few weeks ago I posted a draft version of a paper on the statistical analysis of style in BBC news bulletins (here), and I am currently part way through a similar paper on news broadcasts on ITV. But until then, here are some papers worth reading on television style that address various issues that have been raised elsewhere on this blog.
viewers’ attention and memory, Journal of Marketing Communications 9 (1): 17–28.
This study investigated the effects of advertising pacing (i.e. the number of visual cuts in an advertisement) on viewers’ voluntary and involuntary attention to an advertisement, as well as its effects on the recall of claim-related and non-claim-related components of the advertisement. Using a limited capacity model of information processing/retrieval as its theoretical base and physiologically oriented measures of attention, this study provided some evidence that fast-paced advertisements (as compared to slower paced ones) may have a positive effect on viewers’ involuntary (automatic) attention towards an advertisement, but have little differential effect on their voluntary attention. Furthermore, it appeared that the enhanced involuntary attention gained through the use of fast-paced advertisements comes in the form of attention directed towards the non-claim (advertisement executional) elements of an advertisement as opposed to the message-based (copy) elements of the advertisement. The practical and theoretical implications of these findings are discussed.
A scene is proposed as the unit of analysis in broadcast news studies as a way to measure a more accurate representation of perspectives and arguments of a story. Based on film studies, a scene is defined as a unit that represents continuity in time, place, character, ideas, or themes in a news story. The role of a scene in a news story is analyzed by examining how the position, length, and proportion of a scene frame and valence are related to story frame and valence.
McCollum JF and Bryant J 1999 Pacing in children’s television programming, Annual Meeting of the Association for Education in Journalism and Mass Communication, 4-7 August 1999, New Orleans, LA.
Pak H 2007 The Effects of Incongruity, Production Pacing, and Sensation Seeking on TV Advertisements, unpublished Ph.D. Thesis, Cornell University.
This study addresses an important area of research that has fascinated advertising professionals who are eager to make more attractive ads: understanding how the viewing audience perceives and processes television advertisements. Ad incongruity, the introduction of unexpected elements that are atypical of a given ad category, and production pacing were tested to explore the roles of these stimuli in capturing higher levels of arousal, which can produce both better evaluations and clearer memories of ads. Sixty subjects, who were recruited from among undergraduate students at Cornell University and patrons of a local shopping mall, participated in an experiment in which a set of TV ads was shown. Participants then answered questions immediately following exposure to the ads, providing data pertaining to sensation seeking, ad evaluation, arousal, and memory. The ads themselves represented six different conditions: incongruent and slow paced, incongruent and medium paced, incongruent and fast paced, congruent and slow paced, congruent and medium paced, and congruent and fast paced. The main findings involved Lang's limited capacity model. It was found that the mental capacity or cognitive load required to process incongruent fast-paced ads exceeded study participants' cognitive capacity to process the information in such ads. When ads with both fast-paced and incongruent elements were shown, participants' memory for that particular kind of ads declined. The study provided confirmation of Lang's (2000) limited capacity model. The study's contributions include a key finding pertaining to incongruity effects that should help to resolve discrepancies in the literature on incongruity. As expected, incongruent ads were evaluated more positively, and were more arousing and better remembered than congruent ads. Production pacing also had some effect on participants. As pacing increased, participants remembered better and ad evaluations tended to be more positive.
However, ad type had a significant influence on the processing of ads. Car ads were evaluated more positively, were more arousing, and were better remembered than over-the-counter drug ads. There were no significant relationships between sensation seeking and incongruity or sensation seeking and production pacing.
Schaefer RJ and Martinez TJ 2009 Trends in network news editing strategies from 1969 through 2005, Journal of Broadcasting and Electronic Media 53 (3): 347-364.
Four editing variables were tracked through a content analysis of U.S. commercial network editing that spanned a 36-year period. The analysis revealed that synthetic-montage increased and continuity-realism decreased from 1969 through 1997. Network news editors also embraced faster pacing, shorter soundbites, and more special effects between 1969 and 2005. When taken together, the results suggest that U.S. network television journalism has evolved from more “camera of record” and realistic news techniques in favor of a variety of synthetic editing strategies that convey complex audio-visual arguments.
Lillard AS and Petersen J 2011 The immediate impact of different types of television on young children’s executive function, Pediatrics 128 (4): doi: 10.1542/peds.2010-1919.
Objective: The goal of this research was to study whether a fast-paced television show immediately influences preschool-aged children’s executive function (eg, self-regulation, working memory).
Methods: Sixty 4-year-olds were randomly assigned to watch a fast-paced television cartoon or an educational cartoon or draw for 9 minutes. They were then given 4 tasks tapping executive function, including the classic delay-of-gratification and Tower of Hanoi tasks. Parents completed surveys regarding television viewing and child’s attention.
Results: Children who watched the fast-paced television cartoon performed significantly worse on the executive function tasks than children in the other 2 groups when controlling for child attention, age, and television exposure.
Conclusions: Just 9 minutes of viewing a fast-paced television cartoon had immediate negative effects on 4-year-olds’ executive function. Parents should be aware that fast-paced television shows could at least temporarily impair young children’s executive function.
Following on from my use of running Mann-Whitney Z statistics to look at the time series structure of Top Hat (here), this week I have the first draft of an analysis of 15 BBC news bulletins using the same method.
Shot length data from 15 news bulletins broadcast at 1300, 1800, and 2200 on BBC 1 between 11 April 2011 and 15 April 2011, inclusive, is used to compare the editing style between different bulletins broadcast at different times on different days and to examine the time series structure by identifying clusters of shots short and long duration. The results show there is no evidence that shot length distributions of BBC news bulletins vary with the time or day of broadcast, and the style of editing is consistent across the sample. There is also no evidence the highly structured format of television news is related to the time series of shot lengths beyond the opening title sequence, which is associated with a cluster of short shots in every bulletin. The number, order, and location of clusters of longer and shorter shots is different for each bulletin; and there are several examples of abrupt transitions between different editing regimes, but no evidence of any cycles present in the time series. Although there is no overall common pattern to the editing, there are some consistent features in the time series for these bulletins: clusters of shorter shots are associated with footage derived from non-BBC sources (library footage, other broadcasters, public information films) and montage sequences; while clusters of shots of longer duration are associated with shots in which the viewer is addressed directly by the presenter or reporter (including graphics), live-two-way interviews, and speeches or interviews with key actors in a news item.
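For readers unfamiliar with the statistic behind the clustering method, here is a minimal pure-Python sketch (hypothetical shot-length samples, no tie correction) of the Mann-Whitney Z computed when comparing two windows of shots:

```python
import math

def mann_whitney_z(a, b):
    """Normal-approximation Mann-Whitney Z for two samples (assumes no ties)."""
    combined = sorted(a + b)
    # Rank of each value in the pooled sample (1-based).
    rank = {v: i + 1 for i, v in enumerate(combined)}
    r1 = sum(rank[v] for v in a)          # rank sum of the first sample
    n1, n2 = len(a), len(b)
    u = r1 - n1 * (n1 + 1) / 2.0          # Mann-Whitney U statistic
    mean = n1 * n2 / 2.0
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return (u - mean) / sd

# Hypothetical shot lengths (seconds): a window of short shots vs. longer ones.
z = mann_whitney_z([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
assert round(z, 2) == -1.96  # the short-shot window ranks uniformly lower
```

A negative Z marks a window of shorter-than-typical shots; tracking the running Z along the bulletin is what exposes the clusters described above.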
|
# Generated by Django 2.1.5 on 2019-01-22 05:17
from django.db import migrations, models
class Migration(migrations.Migration):
dependencies = [
('beatnik', '0010_musicaccess'),
]
operations = [
migrations.CreateModel(
name='MusicClick',
fields=[
('id', models.AutoField(auto_created=True, primary_key=True, serialize=False, verbose_name='ID')),
('user_agent', models.TextField(null=True, verbose_name="Client's user agent")),
('ip_address', models.CharField(max_length=45, null=True, verbose_name="Client's IP address")),
('referer', models.URLField(null=True, verbose_name='HTTP referer')),
('link', models.URLField(verbose_name='The web address of the link clicked')),
('link_type', models.CharField(choices=[('apple', 'Apple Music'), ('gpm', 'Google Play Music'), ('soundcloud', 'Soundcloud'), ('spotify', 'Spotify')], max_length=10, verbose_name='Type of link')),
],
),
]
|
Michelle is committed to helping businesses navigate all issues that arise under employment laws, many of which can significantly impact a company’s reputation, culture and financial success. Her focus is on practical advice tailored to each client’s specific needs.
Michelle Kaemmerling has been practicing with Wright Lindsey Jennings since relocating to her hometown of Little Rock in 2001. Her practice focuses on employment counseling and litigation, including class actions and collective actions. Kaemmerling regularly provides advice and training regarding employment law compliance. In addition to her employment practice, Kaemmerling leads WLJ e-Discovery Solutions, the firm’s electronic discovery division. She also serves in firm management as leader of the Labor & Employment team.
Conducted numerous internal investigations for local and national companies involving sensitive issues relating to executives and senior managers.
Managed collection, review and production of more than 1 million documents in a group of related lawsuits, including a class action suit.
Lead counsel for a restaurant franchisee and a utility locator in collective action lawsuits alleging wage and hour violations.
Lead counsel for an international retailer in several lawsuits alleging discrimination and retaliation claims.
Lead counsel in numerous EEOC charges and lawsuits against public and private employers—including hospitals, retailers and manufacturers—involving violations of various federal antidiscrimination laws, such as Title VII of the Civil Rights Act of 1964, the Americans with Disabilities Act, the Age Discrimination in Employment Act, the Uniformed Services Employment and Reemployment Rights Act and the Genetic Information Nondiscrimination Act.
Representation of physician management company, including drafting physician employment contracts.
In 2019, Kaemmerling was appointed to the Inclusion Subcommittee of Little Rock Mayor Frank Scott, Jr.’s Transition Team. Since 2015, Kaemmerling has served on the Arkansas Supreme Court Committee on Model Jury Instructions-Civil. Kaemmerling is a past chair of the Arkansas Bar Association’s Labor and Employment Section. She has been recognized as a “Super Lawyer” by Mid-South Super Lawyers in the area of Labor & Employment and as a “Leader in their Field” by Chambers USA each year since 2010. Best Lawyers recognized Kaemmerling in the areas of Employment Law-Management and Litigation-Labor and Employment. Kaemmerling is a graduate of the 2008-2009 class of Leadership Greater Little Rock.
|
# -*- coding: utf-8 -*-
"""This module contains interfaces for all Task management features of the
REST API
"""
from dyn.compat import force_unicode
from dyn.tm.session import DynectSession
__author__ = 'mhowes'
def get_tasks():
response = DynectSession.get_session().execute('/Task', 'GET',
{})
return [Task(task.pop('task_id'), api=False, **task)
for task in response['data']]
class Task(object):
"""A class representing a DynECT Task"""
def __init__(self, task_id, *args, **kwargs):
super(Task, self).__init__()
self._task_id = task_id
self._blocking = self._created_ts = None
self._customer_name = self._debug = None
self._message = self._modified_ts = None
self._name = self._status = None
self._step_count = None
self._total_steps = self._zone_name = None
self._args = None
if 'api' in kwargs:
del kwargs['api']
self._build(kwargs)
self.uri = '/Task/{}'.format(self._task_id)
def _build(self, data):
"""Build this object from the data returned in an API response"""
for key, val in data.items():
if key == 'args':
self._args = [{varg['name']: varg['value']}
for varg in val]
else:
setattr(self, '_' + key, val)
@property
def args(self):
"""Returns List of args, and their value"""
return self._args
@property
def blocking(self):
"""Returns whether this task is in a blocking state."""
return self._blocking
@property
def created_ts(self):
"""Returns Task Creation timestamp"""
return self._created_ts
@property
def customer_name(self):
"""Returns Customer Name"""
return self._customer_name
@property
def debug(self):
"""Returns Debug Information"""
return self._debug
@property
def message(self):
"""Returns Task Message"""
return self._message
@property
def modified_ts(self):
"""Returns Modified Timestamp"""
return self._modified_ts
@property
def name(self):
"""Returns Task Name"""
return self._name
@property
def status(self):
"""Returns Task Status"""
return self._status
@property
def step_count(self):
"""Returns Task Step Count"""
return self._step_count
@property
def task_id(self):
"""Returns Task_id"""
return self._task_id
@property
def total_steps(self):
"""Returns Total number of steps for this task"""
return self._total_steps
@property
def zone_name(self):
"""Returns Zone name for this task"""
return self._zone_name
def refresh(self):
"""Updates :class:'Task' with current data on system. """
api_args = dict()
response = DynectSession.get_session().execute(self.uri, 'GET',
api_args)
self._build(response['data'])
def cancel(self):
"""Cancels Task"""
api_args = dict()
response = DynectSession.get_session().execute(self.uri, 'DELETE',
api_args)
self._build(response['data'])
def __str__(self):
return force_unicode('<Task>: {} - {} - {} - {} - {}').format(
self._task_id, self._zone_name,
self._name, self._message, self._status)
__repr__ = __unicode__ = __str__
    def __bytes__(self):
        """bytes override"""
        return self.__str__().encode('utf-8')
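The `_build` helper copies each key of an API response onto a private `_`-prefixed attribute, special-casing the `args` list of name/value pairs. A minimal, self-contained sketch of that pattern (`BuildDemo` is a name invented here for illustration, not part of the dyn SDK):

```python
class BuildDemo(object):
    """Illustrates the Task._build pattern: map API response keys to
    private attributes, flattening the 'args' list of name/value pairs."""
    def __init__(self, data):
        self._args = None
        self._build(data)

    def _build(self, data):
        for key, val in data.items():
            if key == 'args':
                # [{'name': 'zone', 'value': 'x'}] becomes [{'zone': 'x'}]
                self._args = [{arg['name']: arg['value']} for arg in val]
            else:
                setattr(self, '_' + key, val)
```

After construction, `BuildDemo({'status': 'complete'})._status` holds `'complete'`, mirroring how `Task` exposes each response field through a read-only property.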
|
PresSsion® 8-chamber sleeve for use with PresSsion® 652-8 PresSsion® Pump to provide intermittent compression. 8-chamber sleeve, arm - shoulder large. Ships in 1-2 Days.
|
from flask import current_app
from flask_login import current_user
def allowed_file(filename):
return '.' in filename and filename.rsplit('.', 1)[1].lower() in {'txt', 'toml', 'dat'}
def check_role(role):
if not current_user.is_authenticated or current_user.role != role:
return current_app.login_manager.unauthorized()
# return redirect(url_for(current_user.role + 'api'))
def parse_multi_form(form):
data = {}
for url_k in form:
v = form[url_k]
ks = []
while url_k:
if '[' in url_k:
k, r = url_k.split('[', 1)
ks.append(k)
if r[0] == ']':
ks.append('')
url_k = r.replace(']', '', 1)
else:
ks.append(url_k)
break
sub_data = data
for i, k in enumerate(ks):
if k.isdigit():
k = int(k)
if i + 1 < len(ks):
if not isinstance(sub_data, dict):
break
if k in sub_data:
sub_data = sub_data[k]
else:
sub_data[k] = {}
sub_data = sub_data[k]
else:
if isinstance(sub_data, dict):
sub_data[k] = v
return data
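`parse_multi_form` expands PHP-style bracketed keys (`a[b][0]`) into nested dicts; note that numeric segments become integer dict keys, not list indices. A runnable demo (the function is repeated here verbatim so the sketch is self-contained):

```python
def parse_multi_form(form):
    data = {}
    for url_k in form:
        v = form[url_k]
        ks = []
        while url_k:
            if '[' in url_k:
                k, r = url_k.split('[', 1)
                ks.append(k)
                if r[0] == ']':
                    ks.append('')
                url_k = r.replace(']', '', 1)
            else:
                ks.append(url_k)
                break
        sub_data = data
        for i, k in enumerate(ks):
            if k.isdigit():
                k = int(k)
            if i + 1 < len(ks):
                if not isinstance(sub_data, dict):
                    break
                if k in sub_data:
                    sub_data = sub_data[k]
                else:
                    sub_data[k] = {}
                    sub_data = sub_data[k]
            else:
                if isinstance(sub_data, dict):
                    sub_data[k] = v
    return data

# 'user[langs][0]' walks user -> langs -> 0 (an integer key, not a list index)
print(parse_multi_form({'user[name]': 'alice', 'user[langs][0]': 'py'}))
# → {'user': {'name': 'alice', 'langs': {0: 'py'}}}
```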
|
Mostly, we are horrified by Donald Trump. Last week, the Republican presidential hopeful’s statement claiming that women who proceed with abortions should face “some form of punishment”, sparked mass outrage. And yet Northern Irish law permits a very real form of punishment.
Yesterday, Belfast Crown Court heard the case of a young woman who resorted to DIY methods to terminate her pregnancy.
This young woman, who bought drugs on the internet in order to induce a miscarriage, has been given a three-month suspended prison sentence. The then 19-year-old woman was unable to raise enough money to travel to England for a termination. There is a near-total ban on abortion in Northern Ireland. According to FPA, a sexual health charity, 95 per cent of women are prevented from having one.
The woman pleaded guilty to two charges: procuring her own abortion by using a poison, and of supplying a poison with intent to procure a miscarriage. Her housemates alerted the police after discovering blood-stained items and a foetus in the bin of the house they shared. These details are horrible but relevant, as they point to her utter desperation.
When a defence lawyer addressed the court, he observed that his client felt “victimised by the system” and said hers were the actions of “a 19-year-old who felt trapped”. The judge dealt her a three-month suspended sentence.
This young woman was convicted because under Northern Irish law, she doesn’t have ownership of her own body. Her country does not extend to women the right to choose.
The woman – who has not been identified – was 19 years old and 12 weeks into her pregnancy; an English clinic had recommended the online pills as a last resort. That was two years ago. Between the termination and the trial she had a baby with her partner, and she is trying to put her life back together again. This, surely, will not help.
In court, the defence lawyer also observed that had the woman lived in any other region of the UK, she “would not have found herself before the courts”. Had this happened anywhere else in the UK, she would have been supported. She would have been given appropriate treatment, a safe termination and post-abortion counselling. Instead she’s been handed a (suspended) prison sentence. Was the trauma of resorting to a home abortion not punishment enough?
The court heard yesterday of how she was living with people she barely knew and, feeling isolated and trapped with no one to turn to, she resorted to desperate measures. It is difficult to imagine what this young woman must have gone through, and is still going through.
Surely cases like this also prove that banning abortion doesn’t stop abortions from happening – it just pushes them underground, and makes them more dangerous. Abortions must be carried out safely. Criminalising abortion simply makes women vulnerable. Northern Ireland is out of step with the rest of the UK and it must catch up.
We must hope that this young woman will see this change in her lifetime, that it is only a matter of time before the abortion ban in Northern Ireland is lifted. The abortion laws have failed another woman: we need to move forward.
This woman deserved a safe termination, and not only did she go through it alone, she is now being punished for it.
|
import json
import bottle
def JsonResponse(callback):
return JsonResponsePlugin().apply(callback, None)
class JsonResponsePlugin(object):
name = 'JsonResponsePlugin'
api = 2
def apply(self, callback, route):
def wrapper(*args, **kwargs):
try:
out = callback(*args, **kwargs)
if isinstance(out, dict):
if 'result' in out or 'error' in out:
return out
return dict(result = out)
elif isinstance(out, list):
return dict(result = out)
else:
return out
except bottle.HTTPResponse as e:
if isinstance(e.body, dict):
message = e.body
else:
message = dict(message = e.body, code = e.status_code)
headers = [(k,v) for k,v in e.headers.items()]
headers.append(('Content-Type', 'application/json'))
raise bottle.HTTPResponse(json.dumps(dict(error = message)), e.status_code, headers = headers)
return wrapper
@staticmethod
def getErrorHandler(code):
def wrapper(*args, **kwargs):
return JsonResponsePlugin.errorHandler(code, *args, **kwargs)
return wrapper
@staticmethod
def errorHandler(code, *args, **kwargs):
return json.dumps({
'error': {
'code': code,
'message': bottle.HTTP_CODES[code]
}
})
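The plugin's enveloping rule can be seen in isolation: plain dicts and lists are wrapped under a `result` key, while dicts that already carry `result` or `error` pass through untouched. A stand-alone sketch of just that rule (no bottle required; `wrap_response` is a name invented here, and the `HTTPResponse` error path is omitted):

```python
def wrap_response(out):
    """Simplified version of JsonResponsePlugin's wrapper logic."""
    if isinstance(out, dict):
        # already-enveloped responses pass through unchanged
        if 'result' in out or 'error' in out:
            return out
        return dict(result=out)
    elif isinstance(out, list):
        return dict(result=out)
    # non-JSON payloads (strings, file bodies, ...) are left alone
    return out
```

This keeps every JSON endpoint emitting a uniform `{"result": ...}` / `{"error": ...}` shape without each handler having to build the envelope itself.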
|
Bowling Green State University (BGSU) is a top public university established in 1910 in Bowling Green, Ohio. With more than 19,000 students from all 50 U.S. states and 70 different countries, BGSU has campuses in three locations: Bowling Green State University (main campus), BGSU Firelands in Huron, Ohio and BGSU at Levis Commons in Perrysburg, Ohio. The university has more than 800 full-time faculty members and offers highly recognized programs in biology, business, education, English, fine arts, industrial and organizational psychology, physical therapy, psychology, public affairs, rehabilitation counseling, sociology and speech-language pathology.
Confidentiality: It is important for the university to keep sensitive data and systems safe in order to retain the trust of the students, faculty and staff. Exposure of this data can tarnish the university’s brand and reputation.
Integrity: From the dispersing of financial aid to students and payroll to faculty and staff, to ensuring payment to vendors is authorized and accurate, it is critical for the university to protect the integrity of all financial transactions.
Availability: Ensuring that the university’s systems are operational and available to conduct business all the time is imperative.
BGSU needed a more proactive, preventative solution to support their cyber security strategy and to protect the personal accounts of students, faculty and staff; and their servers and infrastructure.
The BGSU security team investigated and tried various solutions available in the market. They needed a solution that provided effective, strong authentication, was easy for students, faculty and administrators to use and delivered a low total cost of ownership (TCO). “Duo checks the box on all three,” says Matt Haschak, Director of IT Security and Infrastructure at Bowling Green State University, adding that Duo delivers a solution that the university can support financially and also helps ensure that they meet all compliance requirements, such as PCI DSS.
Haschak says multi-factor authentication plays a critical role in the university's security strategy. Duo was easy to deploy, so much so that the university was quickly able to expand its Duo deployment to all students, faculty and staff after starting with a few high-risk systems and users. At any given time, BGSU has approximately 30,000 active users of its internal systems and applications.
“Whenever you implement a change such as MFA, there will be people that will be resistant to the change. Duo, however, made it easy to enroll end users. Once a user was enrolled, they automatically received a push to their device and could quickly get access to everything they needed. That made them happy,” Haschak says.
The first line of defense for the university was to enforce MFA on every device accessing the VPN. The second was to ensure that user accounts are not phished and confidential information is safe. BGSU rolled out Duo to its Central Authentication Service (CAS) Single Sign-On (SSO) portal, protecting all applications behind it, including class registration, benefits enrollment, and personal information.
One of the main improvements after implementing Duo was in the effective support of remote users. “I wasn't willing to open my systems to remote users – either not on campus or traveling overseas – but I’m now more confident to allow those types of transactions because we can trust the person on the other end,” says Haschak.
The next phase for the university is to gain visibility into the devices accessing applications and data and to enforce the appropriate policies and controls. This will strengthen security and reduce the risk of a compromised device accessing information.
BGSU was live on Duo in under two weeks and is now protecting more than 30,000 users effectively. Integrating Duo into their system was easy and trouble-free.
Now that Duo is fully implemented, the calls to the help desk from users having trouble authenticating have decreased by 50 percent, adds Haschak.
“Since implementing Duo, BGSU has not seen any unauthorized access on accounts that are protected by Duo. While there is still a threat from hackers attempting to remotely exploit our applications and infrastructure, there are now enough safeguards against unintentional password sharing through successful phishing attacks. Duo provides an immeasurable layer in our defense-in-depth strategy to protect our systems and users,” says Haschak.
|
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
"""Fix missing nested domain DB migration
Revision ID: 27583c259fa7
Revises: 799f0516bc08
Create Date: 2020-05-27 14:18:11.909757
"""
# revision identifiers, used by Alembic.
revision = '27583c259fa7'
down_revision = '799f0516bc08'
import os
import sys
from neutron.db import migration
from oslo_utils import importutils
from gbpservice.neutron.db.migration import alembic_migrations as am
# This is a hack to get around the fact that the versions
# directory has no __init__.py
filepath = os.path.abspath(am.__file__)
basepath = filepath[:filepath.rfind("/")] + "/versions"
sys.path.append(basepath)
DB_4967af35820f = '4967af35820f_cisco_apic_nested_domain'
def ensure_4967af35820f_migration():
if not migration.schema_has_table(
'apic_aim_network_nested_domain_allowed_vlans'):
db_4967af35820f = importutils.import_module(DB_4967af35820f)
db_4967af35820f.upgrade()
def upgrade():
ensure_4967af35820f_migration()
# remove the appended path
del sys.path[sys.path.index(basepath)]
def downgrade():
pass
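The append/import/remove dance around `sys.path` can be exercised on its own: write a module into a directory that has no `__init__.py`, import it by appending the directory to `sys.path`, then drop the path again. A self-contained sketch of the same trick (temp-file based, not the actual migration):

```python
import importlib
import os
import sys
import tempfile

# Create a loose module in a directory that is not a package
tmpdir = tempfile.mkdtemp()
with open(os.path.join(tmpdir, 'loose_module.py'), 'w') as f:
    f.write('VALUE = 42\n')

sys.path.append(tmpdir)
try:
    loose = importlib.import_module('loose_module')
finally:
    # mirror the migration's cleanup: remove the appended path
    del sys.path[sys.path.index(tmpdir)]

print(loose.VALUE)  # → 42
```

The `finally` block matters: just as the migration deletes `basepath` after `upgrade()`, the appended entry should not outlive the import, or later imports could accidentally resolve against it.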
|
Samsung MLT-D203L Black High Yield Toner Cartridge - 5,000 pages.
Your Samsung laser printer will reward you with high quality printing when you buy Genuine Samsung MLT-D203L Black Toner Cartridges from Ink Depot. Rest assured that our Genuine Samsung toner cartridges are authentic. Our cheap printer cartridge prices and FREE delivery for orders over $99 in total will save you more money.
When will my Genuine Samsung MLT-D203L Black Toner Cartridges be delivered?
We deliver Australia wide and have shipping distribution centres available to dispatch orders from all major cities including Melbourne, Sydney, Brisbane, Perth and Adelaide. With so many locations, the Genuine MLT-D203L Black Toner Cartridges along with any other items in your order will be delivered to your door normally within 1-2 business days. Same-day dispatch is often available for orders placed before 1pm. Delivery to rural locations may require extra time.
How much to deliver Genuine Samsung MLT-D203L Black Toner Cartridges?
What About my Genuine Samsung MLT-D203L Black Toner Cartridges quality?
|
from test_support import TestFailed
import marshal
import sys
# XXX Much more needed here.
# Test the full range of Python ints.
n = sys.maxint
while n:
for expected in (-n, n):
s = marshal.dumps(expected)
got = marshal.loads(s)
if expected != got:
raise TestFailed("for int %d, marshal string is %r, loaded "
"back as %d" % (expected, s, got))
n = n >> 1
# Simulate int marshaling on a 64-bit box. This is most interesting if
# we're running the test on a 32-bit box, of course.
def to_little_endian_string(value, nbytes):
bytes = []
for i in range(nbytes):
bytes.append(chr(value & 0xff))
value >>= 8
return ''.join(bytes)
maxint64 = (1L << 63) - 1
minint64 = -maxint64-1
for base in maxint64, minint64, -maxint64, -(minint64 >> 1):
while base:
s = 'I' + to_little_endian_string(base, 8)
got = marshal.loads(s)
if base != got:
raise TestFailed("for int %d, simulated marshal string is %r, "
"loaded back as %d" % (base, s, got))
if base == -1: # a fixed-point for shifting right 1
base = 0
else:
base >>= 1
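On Python 3 the hand-rolled `to_little_endian_string` is subsumed by `int.to_bytes`; with `signed=True`, negative values get the same two's-complement byte pattern that the original loop's `value & 0xff` masking produces. A sketch of the modern equivalent:

```python
def to_little_endian_bytes(value, nbytes):
    # signed=True reproduces the two's-complement bytes that the
    # original loop's (value & 0xff) masking yields for negatives
    return value.to_bytes(nbytes, byteorder='little', signed=True)

print(to_little_endian_bytes(1, 8))   # → b'\x01\x00\x00\x00\x00\x00\x00\x00'
print(to_little_endian_bytes(-1, 8))  # → b'\xff\xff\xff\xff\xff\xff\xff\xff'
```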
|
Use internet dating site where flirty women looking for relationship. Use it simply to pof, in panama singles by registering to your computer to the florida master site. Charles hilton to come join one of portobelo and general information about dating sites in florida panhandle in records dating and free dating site. Life companions. Korn ferry hay group helps you from tropical storm now. In panama, us october 10, us on this site, im, florida. So what's different about dating in the leading site. Site in panama city, fl online dating panama city, list of lesbian singles looking for free dating app. Join one. At both zoo world heritage sites panama city?
Whether you can get out and meet 1000s of free online dating guide. Panama city! Safety data sheets visit our products at hilton, free dating sites. In the best casual dating area. See your quest for panama city, and men and ironkids triathlon 140.6 70.3, 2018. See t more searching for backpage hookups. Badoo is the 1930s. Join our free panama catholic singles in the best possible service and start having fun with the florida dating service. In dating? Come together in the best dating site in panama city singles in panama? If you're senior, florida dating site or live in panama city beach see your quest for panama city, users have secured their animals going? See tons of lonely people looking for panama singles in panama city.
Use internet dating app. Com, restaurants, dies. In panama city? Walking around panama city single women and looking for a huge number of single women and men, florida.
Panama catholic singles reviews helena christian. Site can get the city. In 1970, hand relief massage professional. Most popular panama - backpage hookups.
|
import abc
import os
import h5py
import numpy as np
import miapy.data.indexexpression as expr
import miapy.data.definition as df
class Reader(metaclass=abc.ABCMeta):
"""Represents the abstract dataset reader."""
def __init__(self, file_path: str) -> None:
"""Initializes a new instance.
Args:
file_path(str): The path to the dataset file.
"""
super().__init__()
self.file_path = file_path
def __enter__(self):
self.open()
return self
def __exit__(self, exc_type, exc_val, exc_tb):
self.close()
def __del__(self):
self.close()
@abc.abstractmethod
def get_subject_entries(self) -> list:
"""Get the dataset entries holding the subject's data.
Returns:
list: The list of subject entry strings.
"""
pass
@abc.abstractmethod
def get_shape(self, entry: str) -> list:
"""Get the shape from an entry.
Args:
entry(str): The dataset entry.
Returns:
list: The shape of each dimension.
"""
pass
@abc.abstractmethod
def get_subjects(self) -> list:
"""Get the subject names in the dataset.
Returns:
list: The list of subject names.
"""
pass
@abc.abstractmethod
def read(self, entry: str, index: expr.IndexExpression=None):
"""Read a dataset entry.
Args:
entry(str): The dataset entry.
index(expr.IndexExpression): The slicing expression.
Returns:
The read data.
"""
pass
@abc.abstractmethod
def has(self, entry: str) -> bool:
"""Check whether a dataset entry exists.
Args:
entry(str): The dataset entry.
Returns:
bool: Whether the entry exists.
"""
pass
@abc.abstractmethod
def open(self):
"""Open the reader."""
pass
@abc.abstractmethod
def close(self):
"""Close the reader."""
pass
class Hdf5Reader(Reader):
"""Represents the dataset reader for HDF5 files."""
def __init__(self, file_path: str, category='images') -> None:
"""Initializes a new instance.
Args:
file_path(str): The path to the dataset file.
category(str): The category of an entry that contains data of all subjects
"""
super().__init__(file_path)
self.h5 = None # type: h5py.File
self.category = category
def get_subject_entries(self) -> list:
group = df.DATA_PLACEHOLDER.format(self.category)
return ['{}/{}'.format(group, k) for k in sorted(self.h5[group].keys())]
def get_shape(self, entry: str) -> list:
return self.h5[entry].shape
def get_subjects(self) -> list:
return self.read(df.SUBJECT)
def read(self, entry: str, index: expr.IndexExpression=None):
if index is None:
data = self.h5[entry][()] # need () instead of util.IndexExpression(None) [which is equal to slice(None)]
else:
data = self.h5[entry][index.expression]
        if isinstance(data, np.ndarray) and data.dtype == object:
return data.tolist()
# if h5py.check_dtype(vlen=self.h5[entry].dtype) == str and not isinstance(data, str):
# return data.tolist()
return data
def has(self, entry: str) -> bool:
return entry in self.h5
def open(self):
self.h5 = h5py.File(self.file_path, mode='r', libver='latest')
def close(self):
if self.h5 is not None:
self.h5.close()
self.h5 = None
def get_reader(file_path: str, direct_open: bool=False) -> Reader:
""" Get the dataset reader corresponding to the file extension.
Args:
file_path(str): The path to the dataset file.
direct_open(bool): Whether the file should directly be opened.
Returns:
Reader: Reader corresponding to dataset file extension.
"""
extension = os.path.splitext(file_path)[1]
if extension not in reader_registry:
raise ValueError('unknown dataset file extension "{}"'.format(extension))
reader = reader_registry[extension](file_path)
if direct_open:
reader.open()
return reader
reader_registry = {'.h5': Hdf5Reader, '.hdf5': Hdf5Reader}
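`get_reader` is a factory keyed on file extension, and every `Reader` doubles as a context manager, so callers can write `with get_reader(path) as r: ...`. The dispatch-plus-context-manager shape in isolation (hypothetical `DummyReader` and `demo_` names, no h5py needed):

```python
import os

class DummyReader:
    """Stand-in for Hdf5Reader, illustrating registry dispatch and
    context-manager use of a reader."""
    def __init__(self, file_path):
        self.file_path = file_path
        self.opened = False

    def __enter__(self):
        self.open()
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self.close()

    def open(self):
        self.opened = True

    def close(self):
        self.opened = False

demo_registry = {'.h5': DummyReader, '.hdf5': DummyReader}

def demo_get_reader(file_path, direct_open=False):
    ext = os.path.splitext(file_path)[1]
    if ext not in demo_registry:
        raise ValueError('unknown dataset file extension "{}"'.format(ext))
    reader = demo_registry[ext](file_path)
    if direct_open:
        reader.open()
    return reader

with demo_get_reader('dataset.h5') as reader:
    assert reader.opened  # opened on __enter__, closed on __exit__
```

Registering readers in a module-level dict keeps `get_reader` closed for modification: supporting a new format only requires adding an extension-to-class entry.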
|
Interscholastic extra-curricular activities means a pupil activity program that a school or school district sponsors or participates in and that includes participants from more than one school or school district. Interscholastic extra-curricular activity does not include any activity included in the school district’s graded course of study.
Any student with a total grade point average of less than 2.0 on the weighted scale but higher than a 1.0 will be placed on “academic watch.” Any student on academic watch must have his/her teachers complete a weekly grade and effort report during that season and submit it to the head coach. They must also participate in the extra-curricular intervention/study table program.
A student enrolling in the seventh grade for the first time will be eligible for the first grading period regardless of previous academic achievement. Thereafter, in order to be eligible, a student in grades 7 or 8 must be currently enrolled, must have been enrolled in school the immediately preceding grading period, and must have received passing grades during that grading period in a minimum of five of those subjects in which the student received grades.
A student enrolled in the first grading period after advancement from the eighth grade must have passed a minimum of 5 of all subjects carried the preceding grading period in which the student was enrolled.
Athletes may not try out or practice without a completed blue card, code of conduct, physical card, Emergency Medical Form, or eligibility bulletin. A district home web page release form is also needed to permit names of athletes and pictures online. Forms are to be collected and submitted as a package. Coaches are required to check all forms for completeness. Incomplete forms should be returned to the student athlete, and the student athlete will not be allowed to practice or compete until corrected. Athletes who have previously participated in a sport during the current school year will have a blue card on file. Forms are to be completed once per year.
Coaches are to provide the Athletic Director with up-to-date rosters as soon as possible. It is the head coach’s responsibility to review sport specific eligibility with athletes and their parents, and to review the eligibility certificates. The coach must initial all blue cards and eligibility forms before an athlete may compete. This must be accomplished prior to the first contest. Athletes may not compete until this process is complete.
Students of the Northwest Local School District participate in athletics under the regulations of the Ohio High School Athletic Association, the Greater Miami Conference, the Southwest Ohio Conference, and the Northwest Board of Education.
The student must be present for the entire class period for all classes on the day of the performance or game to participate in the event. If a student is not in attendance for all classes on the school day of the event, he/she may participate only with the approval of the athletic director and/or building principal. In the case of a non-school day event, the student must have been present for all classes on the preceding school day. If the student was absent on the preceding school day, the student can participate in a non-school day event only with the approval of the athletic director and/or building principal.
|
"""Module for supporting searching/filtering of department employee lists.
In a logical sense this module implements:
select <fixed_fields> from <model> where <filter_conditions>
where:
- fixed_fields is the fixed list of fields mentioned above
- model is automatically determined by the module
- filter_conditions is provided by the user (and needs to be parsed by the
query string parser).
For simplicity ATM we only support a simple AND of the fields (first_name=foo
AND department=bar). In future a more elaborate query language can be
implemented.
Note that the only fields which can be queried are:
first_name
last_name
gender
curr_title
hire_date
it is only the filter conditions that change.
If the field being filtered is a string type then we treat the value as a
substring to match against else we try to match the exact value."""
import datetime
import logging
import sqlalchemy as sqla
from models.employee import Employee
from models.dept_employee import DeptEmployee
from models.title import Title
import exceptions
LOG = logging.getLogger(__name__)
class FilterExpr(object):
"""Query string parsers must ultimately return a filter expression object."""
def to_sqla(self):
"""Return the SQLAlchemy object corresponding to this expression.
Subclasses must override this"""
return None
class EqualExpr(FilterExpr):
"""Match a value exactly."""
def __init__(self, field, val):
self.field = field
self.val = val
def to_sqla(self):
LOG.debug("converting: {} = {}".format(self.field, self.val))
return (self.field == self.val)
class LikeExpr(FilterExpr):
"""Match a pattern (case insensitive)."""
def __init__(self, field, pattern):
self.field = field
self.pattern = pattern
def to_sqla(self):
LOG.debug("converting: {} ILIKE {}".format(self.field, self.pattern))
return self.field.ilike(self.pattern)
class NotExpr(FilterExpr):
"""Negate an expression."""
sqla_operator = sqla.not_
def __init__(self, expr):
self.expr = expr
def to_sqla(self):
LOG.debug("converting: NOT({})".format(self.expr))
return sqla.not_(self.expr.to_sqla())
class AndExpr(FilterExpr):
def __init__(self, *exprs):
self.exprs = exprs
def to_sqla(self):
sqla_exprs = [expr_obj.to_sqla() for expr_obj in self.exprs]
LOG.debug("converting: AND({})".format(sqla_exprs))
return sqla.and_(*sqla_exprs)
class OrExpr(FilterExpr):
def __init__(self, *exprs):
self.exprs = exprs
def to_sqla(self):
sqla_exprs = [expr_obj.to_sqla() for expr_obj in self.exprs]
LOG.debug("converting: OR({})".format(sqla_exprs))
return sqla.or_(*sqla_exprs)
class QueryParser(object):
"""Base class for parsing query strings.
This class should be used in the following manner:
1. Global config instantiates a QueryParser (sub)class instance during
startup.
2. Caller calls gets parser instance from request.registry.settings
3. Caller calls instance.parse(<query args>)
4. Caller calls instance.search(session, dept_no) where session is a DB
session instance.
5. Search view passes on the result to the template."""
valid_fields = {
"first_name": Employee.first_name,
"last_name": Employee.last_name,
"gender": Employee.gender,
"hire_date": Employee.hire_date,
"curr_title": Title.title,
}
def __init__(self):
self.expr = None
def parse(self):
"""Parse the query inputs and set 'expr' to the corresponding
FilterExpr object.
If the query input was parsed successfully then return True else False
Subclasses must override this. In particular, the arguments to the
parse method must be defined by each subclass. Subclasses should set
the 'expr' attribute to the appropriate FilterExpr instance
representing the parsed query"""
return False
def search(self, session, dept_no):
"""Perform the actual search.
'session' is the SQLAlchemy session object.
'dept_no' is the department number to which the search should be
limited.
This method returns the query object. The caller can make further
modifications to the query (e.g. add limit and offset)
Subclasses should not need to override this"""
# always implicitly add dept_no and filters to select current title
today = datetime.date.today()
title_is_curr = sqla.or_(Title.to_date == None, Title.to_date >= today)
return session.query(Employee.emp_no,
Employee.first_name,
Employee.last_name,
Employee.gender,
Employee.hire_date,
Title.title.label('curr_title')).\
filter(DeptEmployee.dept_no == dept_no).\
filter(DeptEmployee.emp_no == Employee.emp_no).\
filter(DeptEmployee.to_date >= today).\
filter(Title.emp_no == Employee.emp_no).\
filter(title_is_curr).\
filter(self.expr.to_sqla())
class FormQueryParser(QueryParser):
"""A simple query parser.
All the fields are ANDed. If a field is of string type then a substring
match is performed else an exact match is performed."""
def parse(self, **kwargs):
"""Build a filter expression out of the arguments
kwargs contains the fields to be queried (e.g. {"first_name": "foo"})."""
if not kwargs:
self.expr = None
return self
expr_list = []
for field, value in kwargs.items():
try:
field_obj = self.valid_fields[field]
except KeyError:
raise exceptions.UnknownField(field=field)
pat_types = (sqla.String, sqla.CHAR, sqla.VARCHAR)
if isinstance(field_obj.type, pat_types):
expr = LikeExpr(field_obj, '%{}%'.format(value))
else:
expr = EqualExpr(field_obj, value)
expr_list.append(expr)
self.expr = AndExpr(*expr_list) if len(expr_list) > 1 else expr_list[0]
return self
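The `FilterExpr` hierarchy is a small composite: leaf expressions render themselves, and `AndExpr`/`OrExpr` recursively render their children. The same shape with plain strings instead of SQLAlchemy clauses (illustrative names only, not part of this module):

```python
class Expr:
    def render(self):
        raise NotImplementedError

class Eq(Expr):
    """Leaf node: renders a single comparison."""
    def __init__(self, field, val):
        self.field, self.val = field, val

    def render(self):
        return "{} = {!r}".format(self.field, self.val)

class And(Expr):
    """Composite node: renders all children, joined."""
    def __init__(self, *exprs):
        self.exprs = exprs

    def render(self):
        return "AND({})".format(", ".join(e.render() for e in self.exprs))

print(And(Eq('gender', 'M'), Eq('last_name', 'foo')).render())
# → AND(gender = 'M', last_name = 'foo')
```

`to_sqla` plays the role of `render` above: each node produces its SQLAlchemy clause, and `sqla.and_`/`sqla.or_` combine the children's clauses.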
|
Over 200 people joined Variety Tasmania in the beautiful surrounds of the Royal Tasmanian Botanical Gardens on the 22nd February 2019 for an exquisite, progressive gin tasting experience, whilst supporting Tasmanian children in need.
The gin cocktails from Spring Bay Distillery, McHenry Distillery and Forty Spotted Gin proved so popular, many of our guests went back for seconds.
Shake over ice and strain into chilled martini glass, garnish with Tasmanian blueberries or other seasonal fruit.
After watching a traditional Holy Lion Dance, representing bravery and righteousness as well as amiability, performed by the Tasmanian Chinese Buddhist Academy of Australia, our guests tasted a delicious McHenry Distillery Sloe Gin cocktail, made with Sloe berries foraged from the hedge rows around Tasmania. The cocktail for the evening was a Sloe Royal – 30ml of McHenry Sloe Gin and top up with dry Tassie sparkling wine. Simple but delicious, and perfect for warm summer evenings!
Our final gin cocktail for the evening was a pink summer cocktail by Forty Spotted Gin. This is a gin for the inquisitive and creative at heart, designed to excite the curious drinker. Clean, fresh and excitingly complex, Forty Spotted uses local pepperberry as a key botanical.
Stir down Forty Spotted, lime juice and simple syrup on ice.
|
# -*- coding=utf-8 -*-
from __future__ import absolute_import, print_function, unicode_literals
import resolvelib
from .traces import trace_graph
def print_title(text):
print('\n{:=^84}\n'.format(text))
def print_requirement(r, end='\n'):
print('{:>40}'.format(r.as_line(include_hashes=False)), end=end)
def print_dependency(state, key):
print_requirement(state.mapping[key], end='')
parents = sorted(
state.graph.iter_parents(key),
key=lambda n: (-1, '') if n is None else (ord(n[0].lower()), n),
)
for i, p in enumerate(parents):
if p is None:
line = '(user)'
else:
line = state.mapping[p].as_line(include_hashes=False)
if i == 0:
padding = ' <= '
else:
padding = ' ' * 44
print('{pad}{line}'.format(pad=padding, line=line))
class StdOutReporter(resolvelib.BaseReporter):
"""Simple reporter that prints things to stdout.
"""
def __init__(self, requirements):
super(StdOutReporter, self).__init__()
self.requirements = requirements
def starting(self):
self._prev = None
print_title(' User requirements ')
for r in self.requirements:
print_requirement(r)
def ending_round(self, index, state):
print_title(' Round {} '.format(index))
mapping = state.mapping
if self._prev is None:
difference = set(mapping.keys())
changed = set()
else:
difference = set(mapping.keys()) - set(self._prev.keys())
changed = set(
k for k, v in mapping.items()
if k in self._prev and self._prev[k] != v
)
self._prev = mapping
if difference:
print('New pins: ')
for k in difference:
print_dependency(state, k)
print()
if changed:
print('Changed pins:')
for k in changed:
print_dependency(state, k)
print()
def ending(self, state):
print_title(" STABLE PINS ")
path_lists = trace_graph(state.graph)
for k in sorted(state.mapping):
print(state.mapping[k].as_line(include_hashes=False))
paths = path_lists[k]
for path in paths:
if path == [None]:
print(' User requirement')
continue
print(' ', end='')
for v in reversed(path[1:]):
line = state.mapping[v].as_line(include_hashes=False)
print(' <=', line, end='')
print()
print()
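The bookkeeping in `ending_round` boils down to a diff of the pin mapping between rounds: keys absent from the previous mapping are new pins, keys whose value differs are changed pins. That diff, extracted as a stand-alone function (`diff_pins` is a name invented here for illustration):

```python
def diff_pins(prev, curr):
    """Return (new_keys, changed_keys) between two round mappings,
    mirroring StdOutReporter.ending_round's bookkeeping."""
    if prev is None:
        # first round: everything is new, nothing has changed
        return set(curr), set()
    new = set(curr) - set(prev)
    changed = {k for k, v in curr.items() if k in prev and prev[k] != v}
    return new, changed
```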
|
The Gulf oil spill was 2010’s biggest story, so when David Barstow walked into a Houston hotel for last December’s hearings on the disaster, he wasn’t surprised to see that the conference room was packed. Calling the hearing to order, Coast Guard Captain Hung Nguyen cautioned the throng, “We will continue to allow full media coverage as long as it does not interfere with the rights of the parties to a fair hearing and does not unduly distract from the solemnity, decorum, and dignity of the proceedings.” It’s a stock warning that every judge gives before an important trial, intended to protect witnesses from a hounding press. But Nguyen might have been worrying too much. Because as Barstow realized as he glanced across the crowd, most of the people busily scribbling notes in the room were not there to ask questions. They were there to answer them.
“The muscles of journalism are weakening and the muscles of public relations are bulking up — as if they were on steroids,” he says.
In their recent book, “The Death and Life of American Journalism,” Robert McChesney and John Nichols tracked the number of people working in journalism since 1980 and compared it to the numbers for public relations. Using data from the U.S. Bureau of Labor Statistics, they found that the number of journalists has fallen drastically while public relations people have multiplied at an even faster rate. In 1980, there were about .45 PR workers per 100,000 population compared with .36 journalists. In 2008, there were .90 PR people per 100,000 compared to .25 journalists. That’s a ratio of more than three-to-one, better equipped, better financed.
The researcher who worked with McChesney and Nichols, R. Jamil Jonna, used census data to track revenues at public relations agencies between 1997 and 2007. He found that revenues went from $3.5 billion to $8.75 billion. Over the same period, paid employees at the agencies went from 38,735 to 50,499, a healthy 30 percent growth in jobs. And those figures include only independent public relations agencies — they don’t include PR people who work for big companies, lobbying outfits, advertising agencies, non-profits, or government.
Traditional journalism, of course, has been headed in the opposite direction. The Newspaper Association of America reported that newspaper advertising revenue dropped from an all-time high of $49 billion in 2000 to $22 billion in 2009. That’s right — more than half. A lot of that loss is due to the recession. But even the most upbeat news executive has to admit that many of those dollars are not coming back soon. Six major newspaper companies have sought bankruptcy protection in recent years.
Less money means fewer reporters and editors. The American Society of News Editors found the number of newspaper reporters and editors hit a high of 56,900 in 1990. By 2011, the numbers had dropped to 41,600. Much of that loss has occurred since 2007. Network news did not fare any better — the Pew Research Center’s Project for Excellence in Journalism estimates that employment there is less than half of what it was in the peak period of the 1980s.
“I don’t know anyone who can look at that calculus and see a very good outcome,” said McChesney, a communications professor at the University of Illinois.
Michael Schudson, a journalism professor at Columbia University, CJR contributor, and author of “Discovering the News,” said modern public relations started when Ivy Lee, a minister’s son and a former reporter at the New York World, tipped reporters to an accident on the Pennsylvania Railroad. Before then, railroads had done everything they could to cover up accidents. But Lee figured that crashes, which tend to leave visible wreckage, were hard to hide. So it was better to get out in front of the inevitable story.
The press release was born. Schudson said the rise of the “publicity agent” created deep concern among the nation’s leaders, who distrusted a middleman inserting itself and shaping messages between government and the public. Congress was so concerned that it attached amendments to bills in 1908 and 1913 that said no money could be appropriated for preparing newspaper articles or hiring publicity agents.
People “became more conscious that they were not getting direct access, that it was being screened for them by somebody else,” Schudson said.
But there was no turning back. PR had become a fixture of public life. Concern about the invisible filter of public relations became a steady drumbeat in the press. From the classic 1971 CBS documentary, “The Selling of the Pentagon,” warning that the military was using public relations tricks to sell a bigger defense budget, to reports that PR wizards had ginned up testimony about horrors in Kuwait before the first Gulf War, the theme was that spin doctors were pulling the strings.
“If I burn you, I am out of business,” said McCormick, whose organization has a membership of 21,000. He concedes that can be a tough message to relay to a client facing bad press. “The problem is when you get caught up with a client, and the business drives you to tell a message differently than you would advise,” McCormick said.
So what has changed? Isn’t this article yet another in a long line of complaints, starting with Silas Bent’s counting of stories generated by publicity agents in one day’s issue of The New York Times in 1926 (174) or Peter Odegard’s 1930 lament that “reporters today are little more than intellectual mendicants who go from one publicity agent or press bureau to another seeking ‘handouts'”? It is, in a way. But the context has changed. Journalism, the counterweight to corporate and government PR, is shrinking.
The Pew Center took a look at the impact of these changes last year in a study of the Baltimore news market. The report, “How News Happens,” found that while new online outlets had increased the demand for news, the number of original stories spread out among those outlets had declined. In one example, Pew found that area newspapers wrote one-third the number of stories about state budget cuts as they did the last time the state made similar cuts in 1991. In 2009, Pew said, The Baltimore Sun produced 32 percent fewer stories than it did in 1999.
Moreover, even original reporting often bore the fingerprints of government and private public relations. Mark Jurkowitz, associate director of the Pew Center, said the Baltimore report concentrated on six major story lines: state budget cuts, shootings of police officers, the University of Maryland’s efforts to develop a vaccine, the auction of the Senator Theater, the installation of listening devices on public buses, and developments in juvenile justice. It found that 63 percent of the news about those subjects was generated by the government, 23 percent came from interest groups or public relations, and 14 percent started with reporters.
Of course, in the modern world, news does not stay in one place for long. Stories may begin on a newspaper blog or a TV website, but they soon ripple across the Internet like a splash in a pond. Tom Rosenstiel, Pew’s director, said that ripple effect makes the original story that hits the web — and the source of information it is based on — even more important.
Some experts have argued that in the digital age, new forms of reporting will eventually fill the void left by traditional newsrooms. But few would argue that such a point has arrived, or is close to arriving. “There is the overwhelming sense that the void that is created by the collapse of traditional journalism is not being filled by new media, but by public relations,” said John Nichols, a Nation correspondent and McChesney’s co-author. Nichols said reporters usually make some calls and check facts. But the ability of government or private public relations to generate stories grows as reporters have less time to seek out stories on their own. That gives outside groups more power to set the agenda.
Some quick examples: in the academic world, the website Futurity regularly offers polished stories from research universities across the country like “Gems Clear Drug Resistance Hurdle” (Northwestern University) and “Algae Spew Mucus to Alter Sea Ice” (University of Washington); on the business front, Toyota used satellite press conferences and video feeds on its website to respond to allegations about sudden acceleration in its cars last year, and published transcripts on its website of a long interview with reporters at the Los Angeles Times; and in the realm of political advocacy, Media Matters for America led a battle across the Internet for the past several months with the anti-abortion group Live Action over a videotaped sting that Live Action did on Planned Parenthood.
It’s also getting tougher to know when a storyline originates with a self-interested party producing its own story. In 2005 and 2006, the New York Times and the advocacy group PR Watch did separate reports detailing how television news was airing video news releases prepared by corporate or government PR offices, working them into stories as part of their newscasts. PR Watch listed 77 stations which aired the reports, some of them broadcast nearly verbatim.
Stacey Woelfel, the past-chairman of the Radio Television Digital News Association, said when his group looked into the issue after it was raised by the reports, it was troubled by how widespread the use of the releases had become. “Some stations were running video news releases all the time, sometimes packages from corporate interests,” he said.
There is evidence that it has not stopped. James Rainey, the Los Angeles Times media columnist, recently won Penn State’s Bart Richards Award for Media Criticism for columns last year that showed how local television stations were running paid content in their news programs. “There’s a good chance that your small screen expert has taken cash to sell, sell, sell,” Rainey wrote in a Sept. 15 column.
In 2008, the New York Times again returned to the issue of hidden public relations agendas with a series of stories in which Barstow showed how the Pentagon was using retired military officers to deliver the military’s message on the war in Iraq and its counterterrorism efforts. Barstow described how the officers were presented on the news programs as independent consultants offering unvarnished opinions.
“You never know what you don’t know — it is getting harder and harder to find out who is behind those front groups,” she said. That is no accident, according to Wendell Potter, a former vice president for corporate communications at CIGNA, the insurance company.
The industry’s opposition to the bill reflected the public’s concern at the time about government interference in health care, Potter said. But by 2007, public opinion had changed and polls showed that a majority of Americans felt that some degree of government involvement was needed.
“You really want someone that seems to be an ordinary person. That gives you credibility and the perception that the public is on your side,” he said.
The health-insurance industry’s trade group, America’s Health Insurance Plans or AHIP, declined to speak for this story. But executives with the public relations firm APCO Worldwide, which has worked for the health-care industry, said that when their agency sets up a group to fight for an issue, they don’t try to hide their association. B. Jay Cooper, APCO’s managing director, said in the recent health-care fight APCO managed such a group, but every reporter who covered the issue knew who APCO represented. That doesn’t mean the link was always reported to the public.
The problem for Armstrong was that neither organization’s filings proved a link. There was no definite proof that it was the same money. The IRS forms filed by the groups are pretty scanty — they require organizations to list donations but not the donor — and Armstrong had to work with sources to confirm the connection.
It took a while for Armstrong to establish the link, but he did so in a Nov. 17, 2010, story. Neither group would confirm that it was the same money — the Chamber still won’t — but no one called for a correction.
Bill Vickery, who Bloomberg said was paid by the Chamber to help run the opposition in Arkansas, told Armstrong that he organized about 50 events targeting incumbent Sen. Blanche Lincoln, a Democrat who was a key supporter of the health-care law. Lincoln lost by 21 percent in last November’s midterm elections.
But Patterson knew early on that the health-care fight was likely to be the defining issue of the Senate race, and many of the ads were already targeting Lincoln’s position in favor of change to the health-care system. So he asked the campaign’s ad buyer to track the spending. They found $6 million in issue advertising was spent during the period — a very large amount in a small media market state.
From October to early December, Lincoln’s buyer found that the U.S. Chamber of Commerce spent $2 million in advertising. Americans for Stable Healthcare — a coalition of liberal groups, the pharmaceutical industry, and unions in favor of the plan — spent $1.2 million. And the 60 Plus Association, a conservative senior citizen group opposed to the plan, spent $650,000.
One of the largest is the Chamber’s $100 million “Campaign for Free Enterprise,” an effort to fight government involvement in business matters. Besides the traditional effort of advertising, press releases, and position papers, the Chamber has set up groups like Students in Free Enterprise and the Extreme Entrepreneurship Tour to target college campuses.
It’s also making an online push. The Chamber kicked off part of the campaign with $100,000 in prize money for a video contest on its Facebook page. The campaign received 100,000 views, recorded 10,000 votes, and collected 4,000 email addresses to add to the Chamber’s database. Right now, it has 146,000 fans — not Lady Gaga level (more than 30 million at press time) but not bad for a business group.
This story has been co-published by ProPublica and the Columbia Journalism Review.
|
# vim: set et ts=4 sw=4 fileencoding=utf-8:
'''
Django settings for test project
'''
import os
import dj_config_url
from getenv import env
PROJ_ROOT = os.path.dirname(__file__)
SECRET_KEY = 'not_so_secret'
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': os.path.join(PROJ_ROOT, 'test.db'),
},
}
INSTALLED_APPS = (
'test_app',
)
TEST_RUNNER = 'django_nose.NoseTestSuiteRunner'
CACHE_URI = env('TEST_CACHE_URI', 'memcache://127.0.0.1:11211')
CUSTOM_CACHE_BACKEND = env('TEST_CACHE_BACKEND')
CACHES = {
'default': dj_config_url.parse(CACHE_URI)
}
if CUSTOM_CACHE_BACKEND:
CACHES['default']['BACKEND'] = CUSTOM_CACHE_BACKEND
LANGUAGE_CODE = 'en-us'
TIME_ZONE = 'UTC'
USE_I18N = False
USE_L10N = False
USE_TZ = False
DEBUG = True
LOGGING = {
'version': 1,
'disable_existing_loggers': True,
'formatters': {
'simple': {
'format': "%(levelname)s [%(name)s:%(lineno)s] %(message)s",
},
},
'handlers': {
'console': {
'level': 'DEBUG',
'class': 'logging.StreamHandler',
'formatter': 'simple'
},
},
'root': {
'level': 'DEBUG',
'handlers': ['console'],
},
'loggers': {
'django': {
'handlers': ['console'],
'propagate': False,
'level': 'ERROR',
},
'factory': {
'handlers': ['console'],
'propagate': False,
'level': 'ERROR',
},
'natural_key_cache': {
'handlers': ['console'],
'propagate': False,
'level': 'ERROR',
},
},
}
MIDDLEWARE_CLASSES = ()
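The `env()` calls above read test configuration from the process environment, falling back to a default. A minimal stand-in for that helper (assuming `getenv.env` behaves like a thin wrapper over `os.environ`; the real package also coerces values, which this sketch skips) might be:

```python
import os

def env(name, default=None):
    # Return the environment variable's value if set, otherwise the default.
    # Sketch only; values read from the environment always come back as strings.
    return os.environ.get(name, default)

os.environ["TEST_CACHE_URI"] = "memcache://127.0.0.1:11211"
assert env("TEST_CACHE_URI", "fallback") == "memcache://127.0.0.1:11211"
assert env("SOME_UNSET_KEY", "fallback") == "fallback"
```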
|
The AIA Memphis Gallery at 511 South Main exhibits architectural and design work of importance to Memphis and throughout the region.
Join us this Friday for an in-depth look at the 2015 Design Awards winners, including work from archimania, brg3s architects, Belz Architecture, Looney Ricks Kiss, and Self Tucker Architects.
Visit the new AIA Memphis website for more information on the Design Awards winners and all photos from the evening's festivities!
|
# -*- coding:utf-8 -*-
__author__ = 'Administrator'
# import urllib.request
# path = "D:\\Download"
# url = "http://img.picuphost.com/img/upload/image/20151130/113000016301.jpeg"
# name ="D:\\download\\2.jpeg"
# # Note: when saving, the file extension must match the image type (e.g. a jpg image must be saved under a .jpg filename), otherwise the saved picture will be invalid
# conn = urllib.request.urlopen(url)
# f = open(name,'wb')
# f.write(conn.read())
# f.close()
# print('Pic Saved!')
import whyspider
# Initialize the spider object
my_spider = whyspider.WhySpider()
# # Simulate a GET request
# path="G:\PostgraduatePROJECT\Caoliu-master"
# fname='22.jpeg'
# path2 = path+'\\'+fname
# name='G:\\PostgraduatePROJECT\\Caoliu-master\\down\\22.jpeg'
# f = open(name,'wb')
# data= my_spider.send_get('http://img.picuphost.com/img/upload/image/20151130/113000016301.jpeg')
# f.write(data)
# f.close()
# # Simulate a POST request
# print my_spider.send_post('http://3.apitool.sinaapp.com/','why=PostString2333')
#
# # Simulate a GET request
# print my_spider.send_get('http://www.baidu.com/')
#
# # Switch to mobile mode
#my_spider.set_mobile()
#
# # Simulate a GET request
# print my_spider.send_get('http://www.baidu.com/')
# import time
# time1= time.time()
#
# time2= time1+3
# print(time2-time1)
import urllib2
request = urllib2.Request('http://ipoock.com/img/g4/201512242250036siyu.jpeg')
request.add_header('User-Agent', 'fake-client')
#response = urllib2.urlopen(request, timeout=10)
response = urllib2.urlopen('http://ipoock.com/img/g4/201512242250036siyu.jpeg', timeout=10)
print(response)
f = open('J:\\caoliu\\ff.jpeg', 'wb')
# Write the response body, not the response object itself
f.write(response.read())
f.close()
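For reference, under Python 3 the same download could be sketched with `urllib.request` (the URL, User-Agent string, and timeout are taken from the script above; this is a standalone sketch, not part of whyspider):

```python
import urllib.request

def download(url, path, timeout=10):
    # Send the request with a custom User-Agent, read the whole body,
    # then write the raw bytes to disk.
    req = urllib.request.Request(url, headers={'User-Agent': 'fake-client'})
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        data = resp.read()
    with open(path, 'wb') as f:
        f.write(data)
    return len(data)

# Example (network required):
# download('http://ipoock.com/img/g4/201512242250036siyu.jpeg', 'ff.jpeg')
```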
|
All asce 7 10 Wind Load Spreadsheet templates can be downloaded for private use at no charge. We believe that they will be valuable to you! The asce 7 10 Wind Load Spreadsheet templates featured below also work with OpenOffice and Google Spreadsheets, so if you don’t have a version of Microsoft Excel, the only thing stopping you from doing a budget is the time to download and the decision to get your finances under control.
We select the best options at the ideal image resolution just for you, and this photograph is one of the picture choices in our best-photographs gallery for asce 7 10 Wind Load Spreadsheet. We hope you like it.
Posted by admin on 2019-01-25 10:50:01. To view all photographs inside the asce 7 10 Wind Load Spreadsheet photo gallery, please follow this hyperlink.
|
#!/usr/bin/python2
# Bootdreams python
# Written by Joe Balough (sallopedllama at gmail.com)
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
version = 0.3
print ("Bootdreams dot py Version " + str(version))
do_burn = True
# Import relevant modules
import sys
# For running commands and getting their output to stdout
import subprocess
# For string.lower
import string
# To determine if file exits
import os
# For rmtree
import shutil
# Regular expressions
import re
# Query wodim for burners
# Oddly enough, wodim returns an error code if you have a burner but returns 0 if you don't.
def query_burners():
try:
output = subprocess.Popen(['wodim', '--devices'], stdout=subprocess.PIPE, stderr=subprocess.PIPE).communicate()[0]
return re.findall("dev='(\S*)'", output)
except subprocess.CalledProcessError, (exception):
return re.findall("dev='(\S*)'", exception.output)
# Help printing function
def print_help():
print ("Usage: " + sys.argv[0] + " Image_File [Write Speed] [/path/to/burner]")
print ("Acceptable image formats are Discjuggler (CDI), ISO, and BIN/CUE.")
print ("Write speed and burner path are optional. If omitted, lowest speed and the burner at " + drive_path + " is used.")
print ("All burner paths can be found by running 'wodim --devices'.")
# Asks user a yes / no question and quits if the user says no. Default question formatted to fit below a "WARNING: ... " string
def ask_for_continue(question = " Would you like to continue (Y/n)? "):
to_continue = string.lower(raw_input(question))
if to_continue != "" and to_continue[0] == 'n':
exit(1)
# Drive index
try:
drive_path = sys.argv[3]
except IndexError:
try:
drive_path = query_burners()[0]
except IndexError:
print ("Warning: No burner in system. A burner is obviously required.")
exit(1)
# The file to process
try:
input_image = sys.argv[1]
except IndexError:
print ("ERROR: No File Specified.")
print_help()
sys.exit(1)
# Burn Speed
try:
burn_speed = sys.argv[2]
except IndexError:
burn_speed = 0
# See if user was trying to get help
if string.lower(input_image) == "help" or string.lower(input_image) == "--help" or string.lower(input_image) == "-h":
print_help()
sys.exit(1)
# Make sure file exists
if not os.path.isfile(input_image):
print ("ERROR: File not found.")
print_help()
sys.exit(1)
# Convert extension to lower case to properly handle it
input_ext = string.lower(input_image[-3:])
# CDI AND NRG FILE HANDLING
if input_ext == "cdi" or input_ext == "nrg":
# Set some CDI / NRG specific options here
# Default for discjuggler
image_type = "DiscJuggler"
image_info_call = ["cdirip", input_image, "-info"]
# Special case for nero
if input_ext == "nrg":
image_type = "Nero"
image_info_call = ["nerorip", "-i", input_image]
# Print some helpful information
print ("Going to burn " + image_type + " image " + input_image + " at " + str(burn_speed) + "x on burner at " + drive_path)
# Get information about this image file
image_info = subprocess.Popen(image_info_call, stdout=subprocess.PIPE).communicate()[0]
# Make a list containing lists of track types for each session.
# First dimension is Session number, second is Track number
session_data = []
print ("Getting Session and Track information")
    # Split the image_info string on the 'Session N has N track(s)' lines; discard the first piece because it contains no track data
for i in re.split('Session \d+ has \d+ track\(s\)', image_info)[1:]:
# Get all the track types in a list and append it to the list of session data
session_data.append(re.findall('Type: (\S*)', i))
# Check for situations to warn the user about:
# More than 2 sessions:
if len(session_data) > 2:
print ("Warning: Image has more than 2 sessions. Continuing anyway though this is untested.")
# Unsupported session type
for s in session_data:
for t in s:
if not t in ["Mode1/2048", "Mode2/2336", "Mode2/2352", "Audio/2352"]:
print ("ERROR: Unsupported session type " + t + ". Only Mode1/2048, Mode2/2336, Mode2/2352, and Audio/2352 are supported.")
exit(1)
# data/data image with CDDA
if session_data[0] == ["Mode2/2336", "Audio/2352"]:
print ("Warning: CDRecord cannot properly burn a data/data DiscJuggler image with CDDA.")
print (" You can continue anyway though it may be a coaster if there is very little space left in the image.")
ask_for_continue()
# Delete the temp dir if it already exists and create it again
print ("Clearing Temp Directory")
if os.path.isdir('/tmp/bootdreams'):
shutil.rmtree('/tmp/bootdreams', True)
os.mkdir('/tmp/bootdreams')
# Rip the Image
print ("Ripping " + input_ext + " image")
print ("")
# The last version (which did not fail to burn any images for me) did this bit wrong and only -iso was ever passed to cdirip.
# It never got the -cut and -cutall options which together don't work the way the readme says they should.
# Just going to make it not -cutall and fix it if a user tells me they had a bad burn that would have been fixed by it
rip_options = []
if input_ext == "cdi":
rip_options = ["cdirip", input_image, "/tmp/bootdreams", "-iso"]
if session_data[0][0] != "Audio/2352":
rip_options += ["-cut"]
else:
rip_options += ["-full"]
else:
rip_options = ["nerorip"]
if session_data[0][0] != "Audio/2352":
rip_options += ["--trim"]
else:
rip_options += ["--full"]
rip_options += [input_image, "/tmp/bootdreams"]
if subprocess.call(rip_options) != 0:
print ("ERROR: " + input_ext + "rip failed to extract image data. Please check its output for more information.")
exit(1)
# Burn the CD
if do_burn:
print ("Burning CD")
print ("")
index = 1
for s in session_data:
cdrecord_opts = []
for t in s:
if t == "Mode1/2048":
cdrecord_opts += ["-data", "/tmp/bootdreams/tdata" + str(index).zfill(2) + ".iso"]
elif t == "Mode2/2336" or t == "Mode2/2352":
cdrecord_opts += ["-xa", "/tmp/bootdreams/tdata" + str(index).zfill(2) + ".iso"]
elif t == "Audio/2352":
cdrecord_opts += ["-audio", "/tmp/bootdreams/taudio" + str(index).zfill(2) + ".wav"]
index += 1
# Build options list for cdrecord
cdrecord_call = ["cdrecord", "-dev=" + str(drive_path), "gracetime=2", "-v", "driveropts=burnfree", "speed=" + str(burn_speed)]
if index == len(session_data) + 1:
cdrecord_call.append("-eject")
else:
cdrecord_call.append("-multi")
if "-xa" in cdrecord_opts or "-data" in cdrecord_opts:
cdrecord_call.append("-tao")
else:
cdrecord_call.append("-dao")
cdrecord_call += cdrecord_opts
if not do_burn:
print(cdrecord_call)
# Burn the session
if do_burn and subprocess.call(cdrecord_call) != 0:
            print ("ERROR: CDRecord failed. Please check its output for more information.")
exit(1)
if do_burn:
print ("Image burn complete.")
elif input_ext == "iso":
    # TODO: Isos have a checkbox for multisession and a menu option for record mode: mode1 or mode 2 form 1
    # Until those options are exposed, assume a single-session, mode 1 burn
    iso_multi = False
    iso_mode1 = True
    cdrecord_call = ['cdrecord', 'dev=' + str(drive_path), 'gracetime=2', '-v', 'driveropts=burnfree', 'speed=' + str(burn_speed), '-eject', '-tao']
if iso_multi == True:
cdrecord_call += ['-multi']
if iso_mode1 == True:
cdrecord_call += ['-data']
else:
cdrecord_call += ['-xa']
cdrecord_call += [input_image]
|
There are now many hundreds of hypnotherapists who offer what is called "gut-directed hypnotherapy" for IBS, which takes the general techniques of hypnotherapy and applies them directly to the abdominal pain and digestive symptoms which IBS sufferers struggle with. This type of hypnotherapy has been clinically tested and found to be very helpful to many IBS patients.
While the patient is in this state, the therapist will talk to you and make positive suggestions - one typical method for IBS is to ask you to place your hand over your abdomen and imagine that a healing warmth is flowing from your hand to your stomach.
If you don't have the time or the resources to visit a qualified hypnotherapist, you might find some relief in one of the self-hypnosis CDs available, which can be listened to in the comfort of your own home.
Sophie Lee has suffered from IBS for more than 15 years. She runs IBS Tales http://www.ibstales.com where you can read hundreds of personal stories of IBS and self-help tips.
New antidepressant development, until fairly recently, was at best a random, and at worst a problematic and frustrating, process. Often, medications that were developed for one thing were discovered quite accidentally to have much more important therapeutic effects for a completely different disorder.
|
#! /usr/bin/env python
# -*- coding: UTF-8 -*-
#----------------------------------------------------------------------------------------------------------------------*
import sys, time, os
import makefile, default_build_options
#----------------------------------------------------------------------------------------------------------------------*
# displayDurationFromStartTime
#----------------------------------------------------------------------------------------------------------------------*
def displayDurationFromStartTime (startTime) :
totalDurationInSeconds = int (time.time () - startTime)
durationInSecondes = totalDurationInSeconds % 60
durationInMinutes = (totalDurationInSeconds // 60) % 60
durationInHours = totalDurationInSeconds // 3600
s = ""
if durationInHours > 0:
s += str (durationInHours) + "h"
if durationInMinutes > 0:
s += str (durationInMinutes) + "min"
s += str (durationInSecondes) + "s"
print ("Done at +" + s)
#----------------------------------------------------------------------------------------------------------------------*
class GenericGalgasMakefile :
mJSONfilePath = ""
mDictionary = {}
mExecutable = ""
mGoal = ""
mMaxParallelJobs = 0
mDisplayCommands = False
mCompilerTool = []
mLinkerTool = []
mStripTool = []
mSudoTool = ""
mCompilationMessage = ""
mLinkingMessage = ""
mInstallationgMessage = ""
mStripMessage = ""
mAllCompilerOptions = []
mCompilerReleaseOptions = []
mCompilerDebugOptions = []
m_C_CompilerOptions = []
m_Cpp_CompilerOptions = []
m_ObjectiveC_CompilerOptions = []
m_ObjectiveCpp_CompilerOptions = []
mTargetName = ""
mLinkerOptions = []
mExecutableSuffix = ""
mCrossCompilation = ""
def run (self) :
startTime = time.time ()
#--- Source file list
SOURCES = self.mDictionary ["SOURCES"]
#--- LIBPM
LIBPM_DIRECTORY_PATH = self.mDictionary ["LIBPM_DIRECTORY_PATH"]
#--------------------------------------------------------------------------- System
if self.mCrossCompilation == "":
      (SYSTEM_NAME, NODE_NAME, SYSTEM_RELEASE, SYSTEM_VERSION, MACHINE) = os.uname ()  # os.uname () returns (sysname, nodename, release, version, machine)
if SYSTEM_NAME == "Darwin":
MACHINE = "Intel"
SYSTEM_MACHINE = SYSTEM_NAME + "-" + MACHINE
else:
SYSTEM_MACHINE = self.mCrossCompilation
#--- GMP
GMP_DIRECTORY_PATH = LIBPM_DIRECTORY_PATH + "/gmp"
#--- Source directory list
SOURCES_DIR = self.mDictionary ["SOURCES_DIR"]
#--------------------------------------------------------------------------- Include dirs
SOURCES_DIR.append (LIBPM_DIRECTORY_PATH + "/bdd")
SOURCES_DIR.append (LIBPM_DIRECTORY_PATH + "/command_line_interface")
SOURCES_DIR.append (LIBPM_DIRECTORY_PATH + "/files")
SOURCES_DIR.append (LIBPM_DIRECTORY_PATH + "/galgas")
SOURCES_DIR.append (LIBPM_DIRECTORY_PATH + "/galgas2")
SOURCES_DIR.append (LIBPM_DIRECTORY_PATH + "/gmp")
SOURCES_DIR.append (LIBPM_DIRECTORY_PATH + "/streams")
SOURCES_DIR.append (LIBPM_DIRECTORY_PATH + "/time")
SOURCES_DIR.append (LIBPM_DIRECTORY_PATH + "/strings")
SOURCES_DIR.append (LIBPM_DIRECTORY_PATH + "/utilities")
includeDirs = ["-I" + GMP_DIRECTORY_PATH]
for d in SOURCES_DIR:
includeDirs.append ("-I" + d)
#--- Make object
make = makefile.Make (self.mGoal, self.mMaxParallelJobs == 1) # Display command utility tool path if sequential build
#--------------------------------------------------------------------------- Add Compile rule for sources (release)
#--- Object file directory
objectDirectory = "../build/cli-objects/makefile-" + self.mTargetName + "-objects"
#---
objectFileList = []
for source in SOURCES:
objectFile = objectDirectory + "/" + source + ".o"
objectFileList.append (objectFile)
sourcePath = make.searchFileInDirectories (source, SOURCES_DIR)
if sourcePath != "" :
extension = os.path.splitext (source) [1]
rule = makefile.Rule ([objectFile], self.mCompilationMessage + ": " + source)
rule.deleteTargetDirectoryOnClean ()
rule.mDependences.append (sourcePath)
rule.enterSecondaryDependanceFile (objectFile + ".dep", make)
rule.mCommand += self.mCompilerTool
rule.mCommand += self.mCompilerReleaseOptions
rule.mCommand += self.mAllCompilerOptions
if extension == ".c":
rule.mCommand += self.m_C_CompilerOptions
elif extension == ".cpp":
rule.mCommand += self.m_Cpp_CompilerOptions
rule.mCommand += ["-c", sourcePath]
rule.mCommand += ["-o", objectFile]
rule.mCommand += includeDirs
rule.mCommand += ["-MD", "-MP", "-MF", objectFile + ".dep"]
make.addRule (rule) ;
#--------------------------------------------------------------------------- Add EXECUTABLE link rule
EXECUTABLE = self.mExecutable + self.mExecutableSuffix
rule = makefile.Rule ([EXECUTABLE], self.mLinkingMessage + ": " + EXECUTABLE)
rule.mOnErrorDeleteTarget = True
rule.deleteTargetFileOnClean ()
rule.mDependences += objectFileList
rule.mDependences.append (self.mJSONfilePath)
rule.mCommand += self.mLinkerTool
rule.mCommand += objectFileList
rule.mCommand += ["-o", EXECUTABLE]
rule.mCommand += self.mLinkerOptions
postCommand = makefile.PostCommand (self.mStripMessage + " " + EXECUTABLE)
postCommand.mCommand += self.mStripTool
postCommand.mCommand.append (EXECUTABLE)
rule.mPostCommands.append (postCommand)
rule.mPriority = 1
make.addRule (rule) ;
#--------------------------------------------------------------------------- Add Compile rule for sources (debug)
#--- Object file directory
debugObjectDirectory = "../build/cli-objects/makefile-" + self.mTargetName + "-debug-objects"
#---
debugObjectFileList = []
for source in SOURCES:
objectFile = debugObjectDirectory + "/" + source + ".o"
debugObjectFileList.append (objectFile)
sourcePath = make.searchFileInDirectories (source, SOURCES_DIR)
if sourcePath != "" :
extension = os.path.splitext (source) [1]
rule = makefile.Rule ([objectFile], self.mCompilationMessage + " (debug): " + source)
rule.deleteTargetDirectoryOnClean ()
rule.mDependences.append (sourcePath)
rule.enterSecondaryDependanceFile (objectFile + ".dep", make)
rule.mCommand += self.mCompilerTool
rule.mCommand += self.mCompilerDebugOptions
rule.mCommand += self.mAllCompilerOptions
if extension == ".c":
rule.mCommand += self.m_C_CompilerOptions
elif extension == ".cpp":
rule.mCommand += self.m_Cpp_CompilerOptions
rule.mCommand += ["-c", sourcePath]
rule.mCommand += ["-o", objectFile]
rule.mCommand += includeDirs
rule.mCommand += ["-MD", "-MP", "-MF", objectFile + ".dep"]
make.addRule (rule) ;
#--------------------------------------------------------------------------- Add EXECUTABLE_DEBUG link rule
EXECUTABLE_DEBUG = self.mExecutable + "-debug" + self.mExecutableSuffix
rule = makefile.Rule ([EXECUTABLE_DEBUG], self.mLinkingMessage + " (debug): " + EXECUTABLE_DEBUG)
rule.mOnErrorDeleteTarget = True
rule.deleteTargetFileOnClean ()
rule.mDependences += debugObjectFileList
rule.mDependences.append (self.mJSONfilePath)
rule.mCommand += self.mLinkerTool
rule.mCommand += debugObjectFileList
rule.mCommand += ["-o", EXECUTABLE_DEBUG]
rule.mCommand += self.mLinkerOptions
make.addRule (rule) ;
#--------------------------------------------------------------------------- Add Compile rule for sources (lto)
#--- Object file directory
objectLTODirectory = "../build/cli-objects/makefile-" + self.mTargetName + "-objects-lto"
#---
ltoObjectFileList = []
for source in SOURCES:
objectFile = objectLTODirectory + "/" + source + ".o"
ltoObjectFileList.append (objectFile)
sourcePath = make.searchFileInDirectories (source, SOURCES_DIR)
if sourcePath != "" :
extension = os.path.splitext (source) [1]
rule = makefile.Rule ([objectFile], self.mCompilationMessage + " (lto): " + source)
rule.deleteTargetDirectoryOnClean ()
rule.mDependences.append (sourcePath)
rule.enterSecondaryDependanceFile (objectFile + ".dep", make)
rule.mCommand += self.mCompilerTool
rule.mCommand += self.mCompilerReleaseOptions
rule.mCommand += self.mAllCompilerOptions
rule.mCommand += ["-flto"]
if extension == ".c":
rule.mCommand += self.m_C_CompilerOptions
elif extension == ".cpp":
rule.mCommand += self.m_Cpp_CompilerOptions
rule.mCommand += ["-c", sourcePath]
rule.mCommand += ["-o", objectFile]
rule.mCommand += includeDirs
rule.mCommand += ["-MD", "-MP", "-MF", objectFile + ".dep"]
make.addRule (rule) ;
#--------------------------------------------------------------------------- Add EXECUTABLE_LTO link rule
EXECUTABLE_LTO = self.mExecutable + "-lto" + self.mExecutableSuffix
rule = makefile.Rule ([EXECUTABLE_LTO], self.mLinkingMessage + ": " + EXECUTABLE_LTO)
rule.mOnErrorDeleteTarget = True
rule.deleteTargetFileOnClean ()
rule.mDependences += ltoObjectFileList
rule.mDependences.append (self.mJSONfilePath)
rule.mCommand += self.mLinkerTool
rule.mCommand += ltoObjectFileList
rule.mCommand += ["-o", EXECUTABLE_LTO]
rule.mCommand += self.mLinkerOptions
rule.mCommand += ["-flto"]
postCommand = makefile.PostCommand (self.mStripMessage + " " + EXECUTABLE_LTO)
postCommand.mCommand += self.mStripTool
postCommand.mCommand.append (EXECUTABLE_LTO)
rule.mPostCommands.append (postCommand)
rule.mPriority = 1
make.addRule (rule) ;
#--------------------------------------------------------------------------- Add install EXECUTABLE file rule
if len (self.mSudoTool) > 0:
INSTALL_EXECUTABLE = "/usr/local/bin/" + EXECUTABLE
rule = makefile.Rule ([INSTALL_EXECUTABLE], self.mInstallationgMessage + ": " + INSTALL_EXECUTABLE)
rule.mDependences.append (EXECUTABLE)
rule.mCommand += self.mSudoTool
rule.mCommand += ["cp", EXECUTABLE, INSTALL_EXECUTABLE]
make.addRule (rule) ;
#--------------------------------------------------------------------------- Add install EXECUTABLE-lto file rule
if len (self.mSudoTool) > 0:
INSTALL_EXECUTABLE_LTO = "/usr/local/bin/" + EXECUTABLE_LTO
rule = makefile.Rule ([INSTALL_EXECUTABLE_LTO], self.mInstallationgMessage + ": " + INSTALL_EXECUTABLE_LTO)
rule.mDependences.append (EXECUTABLE_LTO)
rule.mCommand += self.mSudoTool
rule.mCommand += ["cp", EXECUTABLE_LTO, INSTALL_EXECUTABLE_LTO]
make.addRule (rule) ;
#--------------------------------------------------------------------------- Add install EXECUTABLE-debug file rule
if len (self.mSudoTool) > 0:
INSTALL_EXECUTABLE_DEBUG = "/usr/local/bin/" + EXECUTABLE_DEBUG
rule = makefile.Rule ([INSTALL_EXECUTABLE_DEBUG], self.mInstallationgMessage + " (debug): " + INSTALL_EXECUTABLE_DEBUG)
rule.mDependences.append (EXECUTABLE_DEBUG)
rule.mCommand += self.mSudoTool
rule.mCommand += ["cp", EXECUTABLE_DEBUG, INSTALL_EXECUTABLE_DEBUG]
make.addRule (rule) ;
#--------------------------------------------------------------------------- Compute jobs
# make.printRules ()
make.addGoal ("all", [EXECUTABLE, EXECUTABLE_DEBUG], "Build " + EXECUTABLE + " and " + EXECUTABLE_DEBUG)
make.addGoal ("debug", [EXECUTABLE_DEBUG], "Build " + EXECUTABLE_DEBUG)
make.addGoal ("release", [EXECUTABLE], "Build " + EXECUTABLE)
make.addGoal ("lto", [EXECUTABLE_LTO], "Build " + EXECUTABLE_LTO)
if len (self.mSudoTool) > 0:
make.addGoal ("install-lto", [INSTALL_EXECUTABLE_LTO], "Build and install " + INSTALL_EXECUTABLE_LTO)
make.addGoal ("install-release", [INSTALL_EXECUTABLE], "Build and install " + INSTALL_EXECUTABLE)
make.addGoal ("install-debug", [INSTALL_EXECUTABLE_DEBUG], "Build and install " + INSTALL_EXECUTABLE_DEBUG)
#--------------------------------------------------------------------------- Run jobs
# make.printGoals ()
make.runGoal (self.mMaxParallelJobs, self.mDisplayCommands)
#--------------------------------------------------------------------------- Ok ?
make.printErrorCountAndExitOnError ()
displayDurationFromStartTime (startTime)
#----------------------------------------------------------------------------------------------------------------------*
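Each compile rule above appends the same flag groups in a fixed order, ending with `-MD -MP -MF <object>.dep` so the compiler emits the secondary dependency file the rule registers. A minimal sketch of that command assembly (the helper name and the sample values are hypothetical, not part of the tool):

```python
# Hypothetical helper sketching how the compile rules above assemble their
# command list: compiler, options, source/object, includes, then the
# GCC/Clang flags that emit a make-format dependency file.
def build_compile_command(compiler, options, source, object_file, include_dirs):
    command = list(compiler) + list(options)
    command += ["-c", source]
    command += ["-o", object_file]
    command += include_dirs
    # -MD writes the .dep file, -MP adds phony targets, -MF names the file
    command += ["-MD", "-MP", "-MF", object_file + ".dep"]
    return command

cmd = build_compile_command(["g++"], ["-O2"], "main.cpp",
                            "objects/main.cpp.o", ["-Isrc"])
```

The `.dep` path derived from the object file matches the argument passed to `enterSecondaryDependanceFile` above, which is what lets a later run rebuild only what changed.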
|
|
#!/usr/bin/env python
# Author:
# Arpit Gupta (arpitg@cs.princeton.edu)
import json
import os
from random import shuffle, randint
import sys
import argparse
def getMatchHash(part, peer_id, count):
return int(1 * part + 1 * peer_id + count)
def generatePoliciesParticipant(part, asn_2_ip, peers, frac, limit_out, cfg_dir):
# randomly select fwding participants
shuffle(peers)
count = int(frac * len(peers))
fwding_peers = set(peers[:count])
# Generate Outbound policies
cookie_id = 1
policy = {}
policy["outbound"] = []
for peer_id in fwding_peers:
peer_count = randint(1, limit_out)
for ind in range(1, peer_count+1):
tmp_policy = {}
# Assign Cookie ID
tmp_policy["cookie"] = cookie_id
cookie_id += 1
# Match
match_hash = getMatchHash(int(part), peer_id, ind)
tmp_policy["match"] = {}
tmp_policy["match"]["tcp_dst"] = match_hash
tmp_policy["match"]["in_port"] = asn_2_ip[part].values()[0]
# Action: fwd to peer's first port (visible to part)
tmp_policy["action"] = {"fwd": peer_id}
# Add this to participants' outbound policies
policy["outbound"].append(tmp_policy)
policy["inbound"] = []
inbound_count = randint(1, limit_out)
for ind in range(1, inbound_count+1):
tmp_policy = {}
# Assign Cookie ID
tmp_policy["cookie"] = cookie_id
cookie_id += 1
# Match
match_hash = getMatchHash(int(part), 0, ind)
tmp_policy["match"] = {}
tmp_policy["match"]["tcp_dst"] = match_hash
# Action: fwd to participant's own first port (visible to part)
tmp_policy["action"] = {"fwd": list(asn_2_ip[part].values())[0]}
# Add this to participant's inbound policies
policy["inbound"].append(tmp_policy)
# Dump the policies to appropriate directory
policy_filename = "participant_" + "AS" + part + ".py"
policy_file = cfg_dir + "policies/" + policy_filename
with open(policy_file,'w') as f:
json.dump(policy, f)
''' main '''
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument('cfg_dir', type=str, help='specify the config file directory, e.g. ./config/')
parser.add_argument('-f', '--frac', type=str, default='1.0', help='fraction of SDN forwarding peers')
args = parser.parse_args()
frac = float(args.frac)
asn_2_ip = json.load(open(args.cfg_dir + "asn_2_ip.json", 'r'))
asn_2_id = json.load(open(args.cfg_dir + "asn_2_id.json", 'r'))
config = json.load(open(args.cfg_dir + "sdx_global.cfg", "r"))
# Params
limit_out = 4
for part in asn_2_ip:
generatePoliciesParticipant(part, asn_2_ip, config["Participants"][str(asn_2_id[part])]["Peers"], frac, limit_out, args.cfg_dir)
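Each generated policy entry pairs a unique cookie with a `tcp_dst` match and a forward action, as the loops above show. A self-contained sketch of one outbound entry's shape (all values are hypothetical):

```python
# Hypothetical values; mirrors the shape of one outbound entry built above.
def make_outbound_policy(cookie_id, match_hash, in_port, peer_id):
    return {
        "cookie": cookie_id,              # unique per policy
        "match": {
            "tcp_dst": match_hash,        # hash derived from part/peer/count
            "in_port": in_port,           # participant's own port
        },
        "action": {"fwd": peer_id},       # forward to the chosen peer
    }

entry = make_outbound_policy(cookie_id=1, match_hash=103, in_port=5, peer_id=3)
```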
|
The Natalie Brettschneider Archive is an ongoing series by Vancouver-based artist Carol Sawyer that features photographs, texts, a video, and music recitals to reconstruct the life and work of a historical genre-blurring performance artist. Brettschneider’s narrative is interwoven with references to people and places that Sawyer has uncovered in her process of research, and by the site-specific insertion of historical artworks and archival material. As a feminist critique of art historical narrative conventions, Sawyer’s project illuminates the persistent gaps and omissions of official histories, and the ways in which photographs are used to support cultural assumptions about gender, age, authorship, and art-making.
Sawyer is the second artist to be featured in CUAG’s Collection Invitational (CI) series. The CI series creates artist-led, open-ended opportunities to research and activate the collection through a week-long research residency at CUAG and subsequent exhibition. It stimulates the production of new artworks and fresh ways of seeing and thinking about the Carleton University collection.
Check out the video of Carol Sawyer’s CUAG performance at the opening reception on January 18th. We thank Landon Arbuckle and Lewis Gordon for their great work on this video!
CUAG acknowledges the support of Library and Archives Canada / Bibliothèque et Archives Canada and National Gallery of Canada / Musée des beaux-arts du Canada as lenders to the exhibition.
|
#!/usr/bin/env python
#solved by FabioLima
#
#NameScript: binary.py
#
#Author and Maintaining: Fabio Lima
#
#-----------------------------------
#Description:
#
#
#-----------------------------------
#
#Example:
#
#
#-----------------------------------
#
#History
#
#v1.0 2017/02/08, FabioLima
#
#-----------------------------------
#
#License: GPL
#
import os,sys
sys.path.append(os.path.abspath('../../3.5/src/'))
from stack import Stack
class Stack(Stack):
def divideBy2(self,number):
test = Stack()
while number > 0:
if (number%2) == 1:
test.push(1)
else:
test.push(0)
number //= 2  # integer division; plain / would make number a float in Python 3
seqBin = ''
while not test.isEmpty():
seqBin = seqBin + str(test.pop())
return seqBin
def baseConverter(self,decimalNumber,base):
digits = "0123456789ABCEF"
test = Stack()
while decimalNumber > 0:
remainder = int(decimalNumber) % int(base)
test.push(remainder)
decimalNumber = int(decimalNumber) // int(base)
answer = ''
while not test.isEmpty():
index = test.pop()
answer += digits[index]
return answer
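The converter above can be exercised with a minimal list-backed stack standing in for the imported `Stack` (the stand-in's `push`/`pop`/`isEmpty` interface is an assumption inferred from the calls above):

```python
class ListStack(object):
    """Minimal stand-in assuming Stack exposes push/pop/isEmpty."""
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def isEmpty(self):
        return not self._items

def base_converter(decimal_number, base):
    digits = "0123456789ABCDEF"            # full 16-character digit string
    stack = ListStack()
    while decimal_number > 0:
        stack.push(decimal_number % base)  # least-significant digit first
        decimal_number //= base            # integer division
    answer = ""
    while not stack.isEmpty():
        answer += digits[stack.pop()]      # pop most-significant digit first
    return answer

print(base_converter(25, 2))   # → 11001
print(base_converter(233, 16)) # → E9
```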
|
The Premise: Lo and Rita are sisters, raised in the family circus; they never stay in one town for more than a few days, so their family is all they have. They’ll marry into the circus, keep performing for their whole lives: until Lo meets a boy whose life is entirely separate from everything she knows and she begins to question what she really wants.
Thoughts: Lisa Heathfield is a superb writer of YA novels; her previous works, Seed (about teens trapped in a cult) and Paper Butterflies (a heartbreaking story of abuse) both caused me to have the kind of emotions that you want to avoid experiencing in any kind of public space. Heathfield knows exactly how to construct a beautiful, touching and deceptively simple story in order to pull on the heartstrings of her readers and Flight of a Starling is no different.
I’m fascinated by stories set in circuses or involving performers, from Angela Carter’s novels to Nights at the Circus and The Lonely Hearts Hotel, and Flight of a Starling is a worthy addition to the list. The descriptions of the day-to-day life of the circus, as well as the performances and performers, are exquisitely detailed, creating a strong image for the reader. The relationships between the characters in the circus, centring on Lo and Rita but also their bonds with their parents, grandfather and friends are compelling and absorbing; it’s a relatively short book and only took me a few hours to red, but I still felt immersed in the story.
Like Kate Ling’s excellent The Loneliness of Distant Beings, Flight of a Starling explores the idea of a life predetermined by the choices of your parents, and even their parents before them; while Ling’s characters railed against living their whole lives enclosed in a spaceship, Lo comes to question whether she wants to spend her whole life in the circus, a feeling that springs from meeting Dean. Their romance is sweet and lacking in overblown melodrama; it shows Lo seeking a more ‘normal’ life, even the like of which others might try to escape.
In Conclusion: Lisa Heathfield should feature in any discussion of top YA writers; Flight of a Starling is yet another assured, emotive and well-executed narrative that packs a punch. I feel like it’s a story that will stay with me.
This week’s TTT, hosted by The Broke and the Bookish, is about our favourite books of the year so far. I’m going by what I’ve read this year as opposed to sticking solely to books published in 2017 (although most of these were). They’re not in order because that’s just too hard. Picking just 10 books from everything I’ve read this year was tricky enough!
I adored this book, its central character, the way it surprised me; you can read my review here. I’ve seen a lot of talk about it on Twitter too which makes me very happy.
This completely astonishing YA book has stayed in my brain all year; teen Mary and her messed-up story of being jailed for the murder of a baby is unlikely to leave me any time soon. Here’s my review.
I am in love with this beautiful book of inspirational women, and very happy to be reading it with my daughter for a second time.
I remain horrified that this didn’t make the Baileys shortlist. It’s a devastatingly gorgeous, sometimes traumatising story of two orphans and a circus, and that description in no way does it justice.
A really striking depiction of a small town in Jamaica, showing poverty, racism and family divisions. I really recommend this book.
This set of graphic novels depicts the Civil Rights Movement, from the perspective of longtime Congressman John Lewis, who played a leading role in the fight for equality. Everything about these books is outstanding; the art, the storytelling style and the way in which the facts are presented.
Hard to get into, but ultimately a very absorbing and epic story of horse-racing, prejudice and families. I still feel like this should have won the Baileys prize.
Excellent collection of poetry and prose, based around McNish’s experiences of pregnancy and motherhood. It’s all so relatable and real; I wish I’d had this when I was going through the early days of parenthood.
This is a sometimes disturbing but always compelling collection of short stories based around human rights. I’ll be using it at school next year in conjunction with Amnesty’s excellent lesson resources.
A superb mix of magical realism and topical coverage of the refugee crisis, this really grabbed my attention and pulled on my heartstrings. It’s a gorgeous book.
The Premise: a short novel about a family whose elevation from a cramped, unimpressive home to greater wealth and security brings more problems than they might have thought.
Thoughts: for such a slight novel (only 192 pages), there’s a lot brewing in Ghachar Ghochar, all dealt with in a brisk style yet somehow superbly developed. The narrator (unnamed, just to add to an ever-growing list of books that does this and thus makes my life difficult when it comes to reviewing) focuses on the different members of his family in a series of nuanced and subtle chapters, giving the reader a sense of really getting to know the various members of his believably peculiar family.
That’s all quite vague, isn’t it? The book begins with the narrator frequenting a coffee house and apparently desperate for guidance from a waiter, which is a fair indication of his general ennui – a feature repeated throughout, particularly in his barely-a-job occupation with the family business. It was this, combined with his wife’s astonished response upon discovering that the businessman she thought she had married was not entirely real, that brought Ghachar Ghochar to life for me. In an oddly Charlie and the Chocolate Factory way, the narrator and his wife share their home with his parents, uncle and sister, creating a claustrophobic atmosphere that creates tension and humour in equal measure; the section in which his mother bullies his uncle’s girlfriend on the doorstep was particularly entertaining.
In Conclusion: it’s a brief read but a really engaging and vibrant one. Ghachar Ghochar could have been twice the length and still just as entertaining and compelling, which is not something I would say about many books.
The Premise: after years on the run, Samuel Hawley returns to Olympus, Massachusetts to start a life with his daughter, Loo. But Hawley bears the scars of a dangerous life – literally, with bullet wounds riddling his body – that, it appears, is pretty difficult to outrun.
Thoughts: I won this book in a giveaway by the publisher on Twitter; if I hadn’t, I’m not sure that I would have picked it up, which would have been a shame. It’s an exciting and intriguing story; in hardback, it looks enormous and, in fairness, it is pretty long at nearly 500 pages, but the story whizzes past at such a rate that I didn’t really notice the length.
Tinti has neatly divided the book, with chapters telling the story of Loo’s life in Olympus, learning about her mother’s death and father’s life alternating with Hawley’s past, with each of these chapters focusing on how he got his bullet wounds. When different narratives interweave, I usually find myself with a strong preference for one or the other, but I enjoyed both the flashbacks and the present in The Twelve Lives of Samuel Hawley, especially the way the sections about the past led up to the present. It also helps to create the ambience of a thriller, particularly as Hawley’s criminal dealings become more dangerous and evident; I wouldn’t ordinarily read something in that genre, but this has made me think I should be more open-minded.
Tinti has a real gift for characterisation; I liked the small-town mentality of Olympus and how this was expressed through a cast of interesting, albeit largely not very pleasant characters. The shady characters of Hawley’s past are menacing without being caricatured, while Hawley himself is enigmatic and creepy. There are intriguing background subplots in the form of Loo’s relationship with a boy whose mother hates Loo and Hawley, as well as the connected subplot concerning the bitterness between the community’s fishermen and the campaign to restrict their activities. It all helps to build a rich and fascinating atmosphere.
In Conclusion: an excellent read all-round, The Twelve Lives of Samuel Hawley would be enjoyed by anyone who enjoys thrillers, mysteries or family sagas. It’s an expansive yet intimate novel which both entertains and unsettles.
|
# -*- coding: utf-8 -*-
# Copyright (C) 2013-2014 Avencall
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>
import re
class DialplanExecutionAnalyzer(object):
def analyze(self, dialplan_parse_result, log_parse_result):
line_analyses = self._do_lines_analyses(dialplan_parse_result, log_parse_result)
return _Analysis(dialplan_parse_result.filename, line_analyses)
def _do_lines_analyses(self, dialplan_parse_result, log_parse_result):
line_analyses = []
for line in dialplan_parse_result.lines:
is_executed = self._is_line_executed(line, log_parse_result, dialplan_parse_result)
line_analysis = _LineAnalysis(line.content, line.is_executable, is_executed)
line_analyses.append(line_analysis)
return line_analyses
def _is_line_executed(self, line, log_parse_result, dialplan_parse_result):
if not line.is_executable:
return False
elif line.extension.startswith('_'):
pattern = line.extension[1:]
for extension in log_parse_result.list_executed_extensions(line.context, line.priority):
if not dialplan_parse_result.has_extension(line.context, extension) and\
_is_extension_match_pattern(extension, pattern):
return log_parse_result.is_executed(line.context, extension, line.priority)
return False
else:
return log_parse_result.is_executed(line.context, line.extension, line.priority)
def _is_extension_match_pattern(extension, pattern):
regex_pattern = _convert_ast_pattern_to_regex_pattern(pattern)
if re.match(regex_pattern, extension):
return True
else:
return False
def _convert_ast_pattern_to_regex_pattern(ast_pattern):
regex_pattern_list = ['^']
index = 0
length = len(ast_pattern)
while index < length:
cur_char = ast_pattern[index]
if cur_char == 'X':
regex_pattern_list.append('[0-9]')
elif cur_char == 'Z':
regex_pattern_list.append('[1-9]')
elif cur_char == 'N':
regex_pattern_list.append('[2-9]')
elif cur_char == '[':
close_index = ast_pattern.find(']', index)
# skip the opening '[' and jump to the ']' (index += 1 below moves past it)
regex_pattern_list.append('[{}]'.format(ast_pattern[index + 1:close_index]))
index = close_index
elif cur_char == '.':
regex_pattern_list.append('.+')
break
elif cur_char == '!':
regex_pattern_list.append('.*')
break
else:
regex_pattern_list.append(re.escape(cur_char))
index += 1
regex_pattern_list.append('$')
return ''.join(regex_pattern_list)
class _Analysis(object):
def __init__(self, filename, line_analyses):
self.filename = filename
self.line_analyses = line_analyses
class _LineAnalysis(object):
def __init__(self, content, is_executable, is_executed):
self.content = content
self.is_executable = is_executable
self.is_executed = is_executed
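The conversion above maps Asterisk wildcard classes onto regex character classes (`X` → `[0-9]`, `Z` → `[1-9]`, `N` → `[2-9]`, with `.`/`!` consuming the rest of the extension). A standalone sketch covering just those cases (character-class patterns like `[15-7]` omitted for brevity):

```python
import re

# Asterisk wildcard classes and their regex equivalents.
AST_CLASSES = {"X": "[0-9]", "Z": "[1-9]", "N": "[2-9]"}

def to_regex(ast_pattern):
    parts = ["^"]
    for ch in ast_pattern:
        if ch in AST_CLASSES:
            parts.append(AST_CLASSES[ch])
        elif ch == ".":
            parts.append(".+")   # '.' matches one or more remaining characters
            break
        elif ch == "!":
            parts.append(".*")   # '!' matches zero or more remaining characters
            break
        else:
            parts.append(re.escape(ch))
    parts.append("$")
    return "".join(parts)

print(to_regex("2XXX"))  # → ^2[0-9][0-9][0-9]$
```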
|
How to make a successful Sunsuper TPD claim?
Sunsuper Superannuation Fund (Sunsuper) is an Australian public offer industry superannuation fund based in Brisbane, Queensland, Australia. It was established in 1987 as a multi-industry superannuation fund open to all workers. Sunsuper is the largest superannuation fund by membership in Queensland, with over one million members and over 100,000 default employers. As at May 2018, it had more than A$55 billion in funds under management.
If you are a member of Sunsuper, depending on your age and other eligibility criteria, you will receive automatic Death Only and/or Death and Total & Permanent Disability (TPD) insurance. You may also have Income Protection (IP) insurance which pays you a monthly wage loss benefit based on a percentage of your salary for either 2 years or to age 65 depending on whether you elected additional insurance.
How do I make a Sunsuper TPD claim?
If you intend making a claim on your Sunsuper TPD and/or IP insurance, it's crucial to understand the conditions around claiming TPD and/or IP benefits in Australia. The process from submitting a claim to receiving a benefit payment can be stressful and lengthy if you don't use specialist TPD and IP lawyers. The large insurers use their own internal TPD and IP lawyers, so why wouldn't you?
Firths are Australia's leading TPD and IP lawyers and have built a successful practice prosecuting disability insurance claims. These successes have seen Firths secure countless timely decisions from super funds such as Sunsuper and its insurer to pay out a client's claim, or proceed to a court hearing in appropriate circumstances.
The first step to make a Sunsuper TPD and/or IP claim is to contact Firths and arrange a suitable time for your free claim review. During the free consultation, Firths' expert TPD and IP lawyers will conduct a detailed review of your potential claim to ensure it meets the criteria for a viable Sunsuper TPD and/or IP claim.
If you have a viable Sunsuper TPD and/or IP claim, Firths will act on a fixed-fee No Win No Fee basis, meaning there is no charge until the end of the case, and then only if the claim succeeds, payable from the settlement. Firths will cover all the out-of-pocket expenses for you.
Was your Sunsuper TPD cover provided by default?
Default Sunsuper TPD policies usually only cover an any-occupation disability. Additional Sunsuper TPD policies may cover both own-occupation and any-occupation disability claims.
Your default Sunsuper TPD cover will typically be any-occupation cover, although you may also have own-occupation cover if arranged in addition to your default policy. You'll need to meet the definition in your policy to be eligible for a claim.
If you’re not a TPD lawyer, the process to make a Sunsuper TPD claim may seem overwhelming. Don’t give up, take up Firths free no-obligation Sunsuper TPD claim assessment.
|
from __future__ import absolute_import, division, print_function, unicode_literals
import time
import os.path
import hashlib
from nmt_chainer.utilities.argument_parsing_tools import OrderedNamespace
##########################################
# A function to compute the hash of a file
# Taken from http://stackoverflow.com/questions/3431825/generating-an-md5-checksum-of-a-file
#
def hash_bytestr_iter(bytesiter, hasher, ashexstr=False):
for block in bytesiter:
hasher.update(block)
return (hasher.hexdigest() if ashexstr else hasher.digest())
def file_as_blockiter(afile, blocksize=65536):
with afile:
block = afile.read(blocksize)
while len(block) > 0:
yield block
block = afile.read(blocksize)
def compute_hash_of_file(filename):
return hash_bytestr_iter(file_as_blockiter(open(filename, 'rb')), hashlib.sha256(), ashexstr=True)
def create_filename_infos(model_filename):
model_infos = OrderedNamespace()
model_infos["path"] = model_filename
model_infos["last_modif"] = time.ctime(os.path.getmtime(model_filename))
model_infos["hash"] = compute_hash_of_file(model_filename)
return model_infos
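A quick way to check the block-wise hashing pattern used above (with a throwaway temp file) is to compare it against hashing the same bytes in one call:

```python
import hashlib
import os
import tempfile

# Hash a file block-wise, mirroring the iterator pattern above.
def sha256_hex(path, blocksize=65536):
    hasher = hashlib.sha256()
    with open(path, "rb") as f:
        block = f.read(blocksize)
        while block:
            hasher.update(block)
            block = f.read(blocksize)
    return hasher.hexdigest()

# Write a small temp file and compare with a one-shot hashlib digest.
fd, path = tempfile.mkstemp()
os.write(fd, b"hello world")
os.close(fd)
assert sha256_hex(path) == hashlib.sha256(b"hello world").hexdigest()
os.remove(path)
```

Reading in fixed-size blocks keeps memory use constant regardless of file size, which is the point of the `file_as_blockiter` generator above.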
|
The Desert Boot walks the line between smart and casual dress, from inner-city to coast & country. Crafted from soft suede, complemented with a padded insole and set on a dual leather-and-rubber sole for extra grip, the Desert Boot is a celebration of the classic heritage and versatility of this everyday shoe.
The Desert Boot - Chocolate walks the line between smart and casual dress, from city to country. Crafted from soft-brushed suede, complemented with a padded insole and set on a dual leather-and-rubber sole for extra grip, our Desert Boot celebrates the heritage and versatility of the everyday shoe.
|