# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import models, migrations


class Migration(migrations.Migration):

    dependencies = [
    ]

    operations = [
        migrations.CreateModel(
            name='Folderbla',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('name', models.CharField(max_length=100)),
            ],
            options={
            },
            bases=(models.Model,),
        ),
        migrations.CreateModel(
            name='Langage',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('langage', models.CharField(max_length=16)),
            ],
            options={
            },
            bases=(models.Model,),
        ),
        migrations.CreateModel(
            name='Script',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('Published', models.DateTimeField(auto_now_add=True)),
                ('Last_edit', models.DateTimeField()),
                ('Title', models.CharField(max_length=100)),
                ('Content', models.TextField()),
                ('folder', models.ForeignKey(to='scripts.Folderbla')),
                ('langage', models.ForeignKey(to='scripts.Langage')),
            ],
            options={
            },
            bases=(models.Model,),
        ),
    ]
|
We were all crammed into the elegant, but smallish, office looking at molds of teeth. X-rays of teeth. Pictures of teeth. The orthodontist was giving me information about my oldest son's overbite and told me one side of his teeth lined up, but uh-oh the other side didn't. How is that even possible? I listened carefully. But all the technical bite information and the cost breakdown was making my mind a little foggy and my heart a little jumpy. It was all kinds of overwhelming.
It feels like it was just a few years ago that I was the one in Dr. Seabold's chair, praying it would be the day my braces came off. How could it have been almost 22 years ago?
What would I say to my 16-year-old self? Wear your retainer!
My teeth were a mess growing up. A bad car accident and not wearing a seat belt left me with a broken jaw at age 5 and very complicated bite issues for the next 12 years. The day I got my braces off is in the top ten days of my life. I felt free---free from feeling insecure about my crooked teeth and hideous bite, free from getting food caught in my braces. It was a good day. I felt almost glamorous with my new straight, white teeth, like a girl in a toothpaste commercial. I smiled for a week straight, which is a big deal for a teenager.
But just like a teenage girl, I was a little too into other teenage girl stuff (like watching 90210 and writing in my diary) to take care of my teeth. Wear my retainer? Um, no thank you. Floss? Didn't I already mention 90210 was on?
The years went by fast and honestly my teeth habits didn't really get any better. I would floss right before my dentist appointment, if I remembered to make an appointment. Not until the last few years, not until I began teaching my own children to brush and floss and take care of their teeth, did I start paying attention to my own. Yes, not being a hypocrite is a very important chapter in the parenting manual. Be an example, a role model, it reads. So, I started brushing better and flossing more.
The last time I was at the dentist they talked to me about my gums receding. They told me it's irreversible and usually related to age. "Are you telling me I'm long in the tooth?" I asked after rinsing my mouth. "Like the phrase that means you're old?"
Being "long in the tooth" means I must have gained some wisdom along the way. Like the importance of taking care of my teeth. Like knowing when a floss is really, really good.
I've actually become a bit of a floss and toothpaste expert, or connoisseur if you will. That is why I jumped at the opportunity to share information about the hands-down best floss out there: Oral-B® Glide. Their tag line should be "this floss will make you happy" because it just does.
This isn't your mom's flimsy, skinny little floss. Oral-B® Glide is serious floss that makes you feel like you are really making a difference every time you use it. It has been clinically proven to help reverse gingivitis. It also protects against cavities and bad breath.
Sitting with my four children looking at all the teeth brought it all back. The years I wished I'd flossed and brushed better and worn my retainer. My kids were wide-eyed and mesmerized with the teeth molds and the fancy braces.
"Any questions?" the orthodontist asked.
"No, I can't think of any right now," I said, my mind racing.
"Yes, in the back," he replied, pointing behind me.
Apparently, my daughter had a question. She had her hand raised high up in the air waiting patiently to be called on.
"So, does he need braces or what?" She wanted the bottom line for her brother.
Bottom line is yes, he needs braces and we need to figure out how to pay for it. Bottom line is that when you know better, you do better. Bottom line is taking care of your teeth is important.
And here's the thing about taking care of teeth, and flossing in particular: you can do it anywhere. There's no shame in flossing. Do it in your car, in your backyard, at the restaurant (preferably in the restroom, but whatever). With each floss you are taking care of yourself. You are a role model.
Oral-B® is hosting a Floss Face Contest. Take a picture of yourself flossing (I dare you to take the picture at the restaurant) and post it here: bit.ly/FlossFace. Every first-time entrant will receive a tote bag with Glide products (while supplies last), and every two weeks from now until July 7th, someone will win an iPad.
To get YOU started I'm giving away one package of six Oral-B® Glide flosses. Leave a comment here or on my Facebook page and a winner will be randomly chosen in one week from today.
Now go take care of your teeth. May the floss be with you. Ugh, sorry, I had to; it was right there and too good.
I participated in this sponsored campaign for One2One Network. I received monetary compensation and product to facilitate my post, but all opinions stated are my own.
I. Hate. Flossing. But, last time I was at the dentist (and I was totally honest when they asked if I had been flossing. I just said no), he said that if I don't floss, a spot that is "not really anything" right now, will turn into "something that needs to be fixed." So, I've been trying. I really, really have. I'm good to floss 3-4 times a week these days. But it's better than zero, right?!
I am not really a big fan of flossing. In my busy schedules, it seems that I don’t have time to do it. But this post of yours seems a wakeup call. It made me realize the importance of flossing. Thanks a lot for sharing.
|
# -*- coding: utf-8 -*-
import os
from csv import excel

from django.db import transaction
from django.utils.translation import ugettext_lazy as _
from unicodecsv import DictReader


class excel_semicolon(excel):
    delimiter = ';'


class RegistryMixin(object):
    """
    Registers targets to the uninstantiated class
    """
    _registry = None

    @classmethod
    def register(cls, target):
        if cls._registry is None:
            cls._registry = {}
        cls._registry[target.__name__] = target

    @property
    def registry(self):
        registry = self._registry or {}
        for line in registry.iteritems():
            yield line


class BaseRunner(RegistryMixin):
    """
    Base import runner class. All subclasses must define their own read() methods.
    """
    NOT_STARTED = 0
    IN_PROGRESS = 1
    FAILED = 2
    SUCCESS = 3

    status_choices = {
        NOT_STARTED: _('Not Started'),
        IN_PROGRESS: _('In Progress'),
        FAILED: _('Failed'),
        SUCCESS: _('Success'),
    }

    _status = NOT_STARTED

    def __init__(self, import_root, encoding, continue_on_error=True):
        self.import_root = import_root
        self.encoding = encoding
        self.errors = {}
        self.continue_on_error = continue_on_error

    @property
    def status(self):
        return self.status_choices[self._status]

    def _set_status(self, value):
        self._status = value

    def read(self, filepath):
        """
        This method must be overridden by subclasses
        """
        raise NotImplementedError

    @transaction.atomic
    def run(self):
        self._set_status(self.IN_PROGRESS)
        global_errors = {}
        for name, cls in self.registry:
            local_errors = []
            filepath = os.path.join(self.import_root, cls._meta.filename)
            for row in self.read(filepath):
                modelimport = cls(row)
                if modelimport.is_valid():
                    modelimport.save()
                else:
                    local_errors.append(modelimport.errors)
                    self._set_status(self.FAILED)
                    if not self.continue_on_error:
                        break
            global_errors[name] = local_errors
        self.errors = global_errors
        if self._status != self.FAILED:
            self._set_status(self.SUCCESS)


class CsvRunner(BaseRunner):
    def __init__(self, *args, **kwargs):
        self.dialect = kwargs.pop('dialect', excel)
        super(CsvRunner, self).__init__(*args, **kwargs)

    def read(self, filepath):
        # unicodecsv decodes from a byte stream, so open in binary mode
        with open(filepath, 'rb') as f:
            reader = DictReader(f, encoding=self.encoding, dialect=self.dialect)
            for row in reader:
                yield row
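The RegistryMixin pattern above (a class-level dict created lazily by register(), then iterated by the registry property) can be exercised in isolation. A minimal Python 3 sketch, using dict.items() in place of the Python 2 iteritems(); the MyRunner and AuthorImport names are made up for illustration:

```python
# Python 3 sketch of the RegistryMixin pattern above.
# MyRunner / AuthorImport are hypothetical names, not from the source.
class RegistryMixin(object):
    _registry = None

    @classmethod
    def register(cls, target):
        # Lazily create the registry dict on first registration
        if cls._registry is None:
            cls._registry = {}
        cls._registry[target.__name__] = target

    @property
    def registry(self):
        # Yield (name, class) pairs, tolerating an empty registry
        registry = self._registry or {}
        for line in registry.items():
            yield line


class MyRunner(RegistryMixin):
    pass


class AuthorImport(object):
    pass


MyRunner.register(AuthorImport)
print(dict(MyRunner().registry))
```

In the runner above, register() would be called once per model-import class, and run() then iterates the registry to process one CSV file per registered class.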
|
The folks over at Gemini were kind enough to send me the Sades Zelda go4zelda Hammer Super Bass 7.1 Surround Sound PC Gaming Headset (whew! what a mouthful!) to review. How does it stand up while listening to music, watching movies, and of course gaming? Read the full review inside!
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals

from django.db import migrations, models
from django.conf import settings


class Migration(migrations.Migration):

    dependencies = [
        migrations.swappable_dependency(settings.AUTH_USER_MODEL),
        ('events', '0008_add_collection_method'),
    ]

    operations = [
        migrations.CreateModel(
            name='Booth',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('name', models.CharField(max_length=100, verbose_name='\u0627\u0633\u0645 \u0627\u0644\u0631\u0643\u0646')),
                ('event', models.ForeignKey(verbose_name=b'\xd8\xa7\xd9\x84\xd8\xad\xd8\xaf\xd8\xab', to='events.Event')),
            ],
        ),
        migrations.CreateModel(
            name='Vote',
            fields=[
                ('id', models.AutoField(verbose_name='ID', serialize=False, auto_created=True, primary_key=True)),
                ('submission_date', models.DateTimeField(auto_now_add=True, verbose_name='\u062a\u0627\u0631\u064a\u062e \u0627\u0644\u062a\u0635\u0648\u064a\u062a')),
                ('is_deleted', models.BooleanField(default=False, verbose_name='\u0645\u062d\u0630\u0648\u0641\u061f')),
                ('booth', models.ForeignKey(verbose_name=b'\xd8\xa7\xd8\xb3\xd9\x85 \xd8\xa7\xd9\x84\xd8\xb1\xd9\x83\xd9\x86', to='events.Booth')),
                ('submitter', models.ForeignKey(related_name='event_booth_voter', verbose_name=b'\xd8\xa7\xd8\xb3\xd9\x85 \xd8\xa7\xd9\x84\xd9\x85\xd8\xb5\xd9\x88\xd8\xaa', blank=True, to=settings.AUTH_USER_MODEL, null=True)),
            ],
        ),
    ]
|
copper crusher manufacturers list. list of manufacturer of copper crusher. list of manufacturer of copper crusher miningbmw Oriental Jaw Crusher, Jaw Crusher For Sales, Jaw Crusher. The Oriental Jaw Crusher is widely used in mining, building materials, chemical industry, metallurgy and so on. list of manufacturer of copper.
Whether you are for group or individual sourcing, we will provide you with the latest technology and the comprehensive data of Chinese suppliers like Copper Ore Crusher factory list to enhance your sourcing performance in the business line of manufacturing & processing machinery.
· bowl liner cone crusher model rxl1010 b003 XINHAI. bowl mill crusher modelbowl crusher model r1 uk crusher usa bowl liner 2 micron mill Egypt manufacturer copper ore and stone crushing plant . bowl liner cone .
copper ore fine crusher manufacturer. Boiler Type. ... It's like a chromium crusher, and a zip grinder had a baby with a removable/replaceable screen. The curved sides in the collection jar, and the tapered lip on the inside make it really easy to get your ground herb out.
jaw crusher suppliers for copper mining offers 1520 jaw crusher pe250x400 products. About 99% of these are ... China Manufacturer gold mining stone crusher jaw, small rock jaw crushers pe250x400 ..... Rock Gold/Copper ore/Iron Ore Mineral Jaw Crusher. 1 Set (Min. Chat Online.
copper crusher,iron crusher,mining mill jaw crusher at tons per hour capacity; advantages of coarse crushing machine; different types of ash stone crushers; stone cladding suppliers south africa; National HI-TECH Industry Development Zone, Zhengzhou, China. Telephone : 0086-371-86162511.
copper ore crusher manufacturer in tanzania. copper ore crusher manufacturer in tanzaniat, Dec 13, 2015 Mobile rock crusher price, mobile rock crusher manufacturers, ZYM mobile maintenance, .
copper ore crusher manufacturer india. copper ore crusher manufacturer india offers 17109 copper ore for sale products. About 24% of these are mineral separator, 15% are mine mill, and 13% are crusher.
copper crusher supplier in south africa pz . copper mobile crusher supplier in south africawhat is a portable concrete crusher in south africa quora 20 may 2016 sep 30, 2013 used stone crusher plant in south africa crusher machine for sale in south africa,stone crusher a is a professional stone crusher machine manufacturer and suppliebeneficiation of copper ores copper south africa .
There are 3,866 copper wire crusher suppliers, mainly located in Asia. The top supplying countries are China (Mainland), Taiwan, and Bulgaria, which supply 99%, 1%, and 1% of copper wire crusher respectively. Copper wire crusher products are most popular in Africa, Southeast Asia, and Mid East.
of the rimfire firing-pin indent copper crusher and how an unusual chain of events almost led to the disappearance of this simple but important technology. ... manufacturer of high-purity copper and began supplying the copper bars used in the copper-crusher measuring system(s).
copper cone crusher manufacturer in angola Copper Portable Crusher Provider In Angola copper portable crusher manufacturer in south africa more in, copper crusher in angola. copper crusher in angola, Leading mobile stone crusher and mill manufacturer in, crusher,Mobile jaw crusher,cone Continue reading » Copper Beneficiation Plant .
|
# Copyright (c) 2008-2021 the MRtrix3 contributors.
#
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
#
# Covered Software is provided under this License on an "as is"
# basis, without warranty of any kind, either expressed, implied, or
# statutory, including, without limitation, warranties that the
# Covered Software is free of defects, merchantable, fit for a
# particular purpose or non-infringing.
# See the Mozilla Public License v. 2.0 for more details.
#
# For more details, see http://www.mrtrix.org/.

import collections, itertools, os, shlex, signal, string, subprocess, sys, tempfile, threading
from distutils.spawn import find_executable
from mrtrix3 import ANSI, BIN_PATH, COMMAND_HISTORY_STRING, EXE_LIST, MRtrixBaseError, MRtrixError
from mrtrix3.utils import STRING_TYPES

IOStream = collections.namedtuple('IOStream', 'handle filename')


class Shared(object):

  # A class for storing all information related to a running process
  # This includes:
  # - The process itself (so that a terminate signal can be sent if required)
  # - For both stdout and stderr:
  #   - File handle (so that if not an internal pipe, the file can be closed)
  #   - File name (so that if not an internal pipe, it can be deleted from the file system)
  # (these are encapsulated in named tuple "IOStream")
  # Note: Input "stdin" should be a stream handle only, or None;
  #   "stdout" and "stderr" should be of type "IOStream"
  class Process(subprocess.Popen):
    def __init__(self, cmd, stdin, stdout, stderr, **kwargs):
      assert isinstance(stdout, IOStream) or stdout is None
      assert isinstance(stderr, IOStream) or stderr is None
      my_kwargs = kwargs.copy()
      my_kwargs['stdin'] = stdin
      my_kwargs['stdout'] = stdout.handle if stdout else None
      my_kwargs['stderr'] = stderr.handle if stderr else None
      super(Shared.Process, self).__init__(cmd, **my_kwargs)
      self.iostreams = (stdout, stderr)

  def __init__(self):
    # If the main script has been executed in an SGE environment, don't allow
    # sub-processes to themselves fork SGE jobs; but if the main script is
    # itself not an SGE job ('JOB_ID' environment variable absent), then
    # whatever run.command() executes can send out SGE jobs without a problem.
    self.env = os.environ.copy()
    if self.env.get('SGE_ROOT') and self.env.get('JOB_ID'):
      del self.env['SGE_ROOT']
    # If environment variable is set, should apply to invoked script,
    # but not to any underlying invoked commands
    try:
      self.env.pop('MRTRIX_QUIET')
    except KeyError:
      pass
    self.env['MRTRIX_LOGLEVEL'] = '1'  # environment variable values must be strings
    # Flagged by calling the set_continue() function;
    # run.command() and run.function() calls will be skipped until one of the inputs to
    # these functions matches the given string
    self._last_file = ''
    self.lock = threading.Lock()
    self._num_threads = None
    # Store executing processes so that they can be killed appropriately on interrupt;
    # e.g. by the signal handler in the mrtrix3.app module
    # Each sequential execution of run.command() either selects the first index for which the value is None,
    # or extends the length of the list by 1, and uses this index as a unique identifier (within its own lifetime);
    # each item is then itself a list of Process class instances required for that command string
    self.process_lists = [ ]
    self._scratch_dir = None
    self.verbosity = 1

  # Acquire a unique index
  # This ensures that if command() is executed in parallel using different threads, they will
  # not interfere with one another; but terminate() will also have access to all relevant data
  def get_command_index(self):
    with self.lock:
      try:
        index = next(i for i, v in enumerate(self.process_lists) if v is None)
        self.process_lists[index] = [ ]
      except StopIteration:
        index = len(self.process_lists)
        self.process_lists.append([ ])
      return index

  def close_command_index(self, index):
    with self.lock:
      assert index < len(self.process_lists)
      assert self.process_lists[index]
      self.process_lists[index] = None

  # Wrap tempfile.mkstemp() in a convenience function, which also catches the case
  # where the user does not have write access to the temporary directory location
  # selected by default by the tempfile module, and in that case re-runs mkstemp()
  # manually specifying an alternative temporary directory
  def make_temporary_file(self):
    try:
      return IOStream(*tempfile.mkstemp())
    except OSError:
      return IOStream(*tempfile.mkstemp('', 'tmp', self._scratch_dir if self._scratch_dir else os.getcwd()))

  def set_continue(self, filename): #pylint: disable=unused-variable
    self._last_file = filename

  def get_continue(self):
    return bool(self._last_file)

  # New function for handling the -continue command-line option functionality
  # Check to see if the last file produced in the previous script execution is
  # intended to be produced by this command; if it is, this will be the last
  # thing that gets skipped by the -continue option
  def trigger_continue(self, entries):
    assert self.get_continue()
    for entry in entries:
      # It's possible that the file might be defined in a '--option=XXX' style argument
      # It's also possible that the filename in the command string has the file extension omitted
      if entry.startswith('--') and '=' in entry:
        totest = entry.split('=')[1]
      else:
        totest = entry
      if totest in [ self._last_file, os.path.splitext(self._last_file)[0] ]:
        self._last_file = ''
        return True
    return False

  def get_num_threads(self):
    return self._num_threads

  def set_num_threads(self, value):
    assert value is None or (isinstance(value, int) and value >= 0)
    self._num_threads = value
    if value is not None:
      # Either -nthreads 0 or -nthreads 1 should result in disabling multi-threading
      external_software_value = 1 if value <= 1 else value
      self.env['ITK_GLOBAL_NUMBER_OF_THREADS'] = str(external_software_value)
      self.env['OMP_NUM_THREADS'] = str(external_software_value)

  def get_scratch_dir(self):
    return self._scratch_dir

  def set_scratch_dir(self, path):
    self.env['MRTRIX_TMPFILE_DIR'] = path
    self._scratch_dir = path

  # Controls verbosity of invoked MRtrix3 commands, as well as whether or not the
  # stderr outputs of invoked commands are propagated to the terminal instantly or
  # instead written to a temporary file for read on completion
  def set_verbosity(self, verbosity):
    assert isinstance(verbosity, int)
    self.verbosity = verbosity
    self.env['MRTRIX_LOGLEVEL'] = str(max(1, verbosity-1))

  # Terminate any and all running processes, and delete any associated temporary files
  def terminate(self, signum): #pylint: disable=unused-variable
    with self.lock:
      for process_list in self.process_lists:
        if process_list:
          for process in process_list:
            if process:
              # No need to propagate signals if we're in a POSIX-compliant environment
              # and SIGINT has been received; that will propagate to children automatically
              if sys.platform == 'win32':
                process.send_signal(getattr(signal, 'CTRL_BREAK_EVENT'))
                process.communicate(timeout=1) # Flushes the I/O buffers, unlike wait()
              elif signum != signal.SIGINT:
                process.terminate()
                process.communicate(timeout=1)
              for stream in process.iostreams:
                if stream:
                  if stream.handle != subprocess.PIPE:
                    try:
                      os.close(stream.handle)
                    except OSError:
                      pass
                  if stream.filename is not None:
                    try:
                      os.remove(stream.filename)
                    except OSError:
                      pass
                stream = None
            process = None
          process_list = None
      self.process_lists = [ ]


shared = Shared() #pylint: disable=invalid-name


class MRtrixCmdError(MRtrixBaseError):
  def __init__(self, cmd, code, stdout, stderr):
    super(MRtrixCmdError, self).__init__('Command failed')
    self.command = cmd
    self.returncode = code
    self.stdout = stdout
    self.stderr = stderr
  def __str__(self):
    return self.stdout + self.stderr


class MRtrixFnError(MRtrixBaseError):
  def __init__(self, fn, text):
    super(MRtrixFnError, self).__init__('Function failed')
    self.function = fn
    self.errortext = text
  def __str__(self):
    return self.errortext


CommandReturn = collections.namedtuple('CommandReturn', 'stdout stderr')


def command(cmd, **kwargs): #pylint: disable=unused-variable
  from mrtrix3 import app, path #pylint: disable=import-outside-toplevel
  global shared #pylint: disable=invalid-name

  def quote_nonpipe(item):
    return item if item == '|' else path.quote(item)

  shell = kwargs.pop('shell', False)
  show = kwargs.pop('show', True)
  mrconvert_keyval = kwargs.pop('mrconvert_keyval', None)
  force = kwargs.pop('force', False)
  env = kwargs.pop('env', shared.env)
  if kwargs:
    raise TypeError('Unsupported keyword arguments passed to run.command(): ' + str(kwargs))
  if shell and mrconvert_keyval:
    raise TypeError('Cannot use "mrconvert_keyval=" parameter in shell mode')

  subprocess_kwargs = {}
  if sys.platform == 'win32':
    subprocess_kwargs['creationflags'] = subprocess.CREATE_NEW_PROCESS_GROUP
  if shell:
    subprocess_kwargs['shell'] = True
  subprocess_kwargs['env'] = env

  if isinstance(cmd, list):
    if shell:
      raise TypeError('When using run.command() with shell=True, input must be a text string')
    cmdstring = ''
    cmdsplit = []
    for entry in cmd:
      if isinstance(entry, STRING_TYPES):
        cmdstring += (' ' if cmdstring else '') + quote_nonpipe(entry)
        cmdsplit.append(entry)
      elif isinstance(entry, list):
        assert all(isinstance(item, STRING_TYPES) for item in entry)
        if len(entry) > 1:
          common_prefix = os.path.commonprefix(entry)
          common_suffix = os.path.commonprefix([i[::-1] for i in entry])[::-1]
          if common_prefix == entry[0] and common_prefix == common_suffix:
            cmdstring += (' ' if cmdstring else '') + '[' + entry[0] + ' (x' + str(len(entry)) + ')]'
          else:
            cmdstring += (' ' if cmdstring else '') + '[' + common_prefix + '*' + common_suffix + ' (' + str(len(entry)) + ' items)]'
        else:
          cmdstring += (' ' if cmdstring else '') + quote_nonpipe(entry[0])
        cmdsplit.extend(entry)
      else:
        raise TypeError('When run.command() is provided with a list as input, entries in the list must be either strings or lists of strings')
  elif isinstance(cmd, STRING_TYPES):
    cmdstring = cmd
    # Split the command string by spaces, preserving anything encased within quotation marks
    if os.sep == '/': # Cheap POSIX compliance check
      cmdsplit = shlex.split(cmd)
    else: # Native Windows Python
      cmdsplit = [ entry.strip('\"') for entry in shlex.split(cmd, posix=False) ]
  else:
    raise TypeError('run.command() function only operates on strings, or lists of strings')

  if shared.get_continue():
    if shared.trigger_continue(cmdsplit):
      app.debug('Detected last file in command \'' + cmdstring + '\'; this is the last run.command() / run.function() call that will be skipped')
    if shared.verbosity:
      sys.stderr.write(ANSI.execute + 'Skipping command:' + ANSI.clear + ' ' + cmdstring + '\n')
      sys.stderr.flush()
    return CommandReturn('', '')

  # If operating in shell=True mode, handling of command execution is significantly different:
  #   Unmodified command string is executed using subprocess, with the shell being responsible for its parsing
  #   Only a single process per run.command() invocation is possible (since e.g. any piping will be
  #   handled by the spawned shell)
  this_process_list = [ ]
  if shell:
    cmdstack = [ cmdsplit ]
    with shared.lock:
      app.debug('To execute: ' + str(cmdsplit))
      if (shared.verbosity and show) or shared.verbosity > 1:
        sys.stderr.write(ANSI.execute + 'Command:' + ANSI.clear + ' ' + cmdstring + '\n')
        sys.stderr.flush()
    # No locking required for actual creation of new process
    this_stdout = shared.make_temporary_file()
    this_stderr = IOStream(subprocess.PIPE, None) if shared.verbosity > 1 else shared.make_temporary_file()
    this_process_list.append(shared.Process(cmdstring, None, this_stdout, this_stderr, **subprocess_kwargs))

  else: # shell=False
    # Need to identify presence of list constructs && or ||, and process accordingly
    try:
      (index, operator) = next((i,v) for i,v in enumerate(cmdsplit) if v in [ '&&', '||' ])
      # If operator is '&&', next command should be executed only if first is successful
      # If operator is '||', next command should be executed only if the first is not successful
      try:
        pre_result = command(cmdsplit[:index])
        if operator == '||':
          with shared.lock:
            app.debug('Due to success of "' + str(cmdsplit[:index]) + '", "' + str(cmdsplit[index+1:]) + '" will not be run')
          return pre_result
      except MRtrixCmdError:
        if operator == '&&':
          raise
      return command(cmdsplit[index+1:])
    except StopIteration:
      pass

    # This splits the command string based on the piping character '|', such that each
    # individual executable (along with its arguments) appears as its own list
    cmdstack = [ list(g) for k, g in itertools.groupby(cmdsplit, lambda s : s != '|') if k ]

    if mrconvert_keyval:
      if cmdstack[-1][0] != 'mrconvert':
        raise TypeError('Argument "mrconvert_keyval=" can only be used if the mrconvert command is being invoked')
      assert not (mrconvert_keyval[0] in [ '\'', '"' ] or mrconvert_keyval[-1] in [ '\'', '"' ])
      cmdstack[-1].extend([ '-copy_properties', mrconvert_keyval ])
      if COMMAND_HISTORY_STRING:
        cmdstack[-1].extend([ '-append_property', 'command_history', COMMAND_HISTORY_STRING ])

    for line in cmdstack:
      is_mrtrix_exe = line[0] in EXE_LIST
      if is_mrtrix_exe:
        line[0] = version_match(line[0])
        if shared.get_num_threads() is not None:
          line.extend( [ '-nthreads', str(shared.get_num_threads()) ] )
        if force:
          line.append('-force')
      else:
        line[0] = exe_name(line[0])
      shebang = _shebang(line[0])
      if shebang:
        if not is_mrtrix_exe:
          # If a shebang is found, and this call is therefore invoking an
          # interpreter, can't rely on the interpreter finding the script
          # from PATH; need to find the full path ourselves.
          line[0] = find_executable(line[0])
        for item in reversed(shebang):
          line.insert(0, item)

    with shared.lock:
      app.debug('To execute: ' + str(cmdstack))
      if (shared.verbosity and show) or shared.verbosity > 1:
        sys.stderr.write(ANSI.execute + 'Command:' + ANSI.clear + ' ' + cmdstring + '\n')
        sys.stderr.flush()

    # Execute all processes for this command string
    for index, to_execute in enumerate(cmdstack):
      # If there's at least one command prior to this, need to receive the stdout from the prior command
      # at the stdin of this command; otherwise, nothing to receive
      this_stdin = this_process_list[index-1].stdout if index > 0 else None
      # If this is not the last command, then stdout needs to be piped to the next command;
      # otherwise, write stdout to a temporary file so that the contents can be read later
      this_stdout = IOStream(subprocess.PIPE, None) if index < len(cmdstack)-1 else shared.make_temporary_file()
      # If we're in debug / info mode, the contents of stderr will be read and printed to the terminal
      # as the command progresses, hence this needs to go to a pipe; otherwise, write it to a temporary
      # file so that the contents can be read later
      this_stderr = IOStream(subprocess.PIPE, None) if shared.verbosity > 1 else shared.make_temporary_file()
      # Set off the process
      try:
        this_process_list.append(shared.Process(to_execute, this_stdin, this_stdout, this_stderr, **subprocess_kwargs))
      # FileNotFoundError not defined in Python 2.7
      except OSError as exception:
        raise MRtrixCmdError(cmdstring, 1, '', str(exception))

  # End branching based on shell=True/False

  # Write process & temporary file information to globals, so that
  # shared.terminate() can perform cleanup if required
  this_command_index = shared.get_command_index()
  with shared.lock:
    shared.process_lists[this_command_index] = this_process_list

  return_code = None
  return_stdout = ''
  return_stderr = ''
  error = False
  error_text = ''

  # Wait for all commands to complete
  # Switch how we monitor running processes / wait for them to complete
  # depending on whether or not the user has specified -info or -debug option
  if shared.verbosity > 1:
    for process in this_process_list:
      stderrdata = b''
      do_indent = True
      while True:
        # Have to read one character at a time: Waiting for a newline character using e.g. readline() will prevent MRtrix progressbars from appearing
        byte = process.stderr.read(1)
        stderrdata += byte
        char = byte.decode('cp1252', errors='ignore')
        if not char and process.poll() is not None:
          break
        if do_indent and char in string.printable and char != '\r' and char != '\n':
          sys.stderr.write(' ')
          do_indent = False
        elif char in [ '\r', '\n' ]:
          do_indent = True
        sys.stderr.write(char)
        sys.stderr.flush()
      stderrdata = stderrdata.decode('utf-8', errors='replace')
      return_stderr += stderrdata
      if not return_code: # Catch return code of first failed command
        return_code = process.returncode
      if process.returncode:
        error = True
        error_text += stderrdata
  else:
    for process in this_process_list:
      process.wait()
      if not return_code:
        return_code = process.returncode

  # For any command stdout / stderr data that wasn't either passed to another command or
  # printed to the terminal during execution, read it here.
  for process in this_process_list:
    def finalise_temp_file(iostream):
      os.close(iostream.handle)
      with open(iostream.filename, 'rb') as stream:
        contents = stream.read().decode('utf-8', errors='replace')
      os.unlink(iostream.filename)
      iostream = None
      return contents
    stdout_text = stderr_text = ''
    if process.iostreams[0].filename is not None:
      stdout_text = finalise_temp_file(process.iostreams[0])
      return_stdout += stdout_text
    if process.iostreams[1].filename is not None:
      stderr_text = finalise_temp_file(process.iostreams[1])
      return_stderr += stderr_text
    if process.returncode:
      error = True
      error_text += stdout_text + stderr_text

  # Get rid of any reference to the executed processes
  shared.close_command_index(this_command_index)
  this_process_list = None

  if error:
    raise MRtrixCmdError(cmdstring, return_code, return_stdout, return_stderr)

  # Only now do we append to the script log, since the command has completed successfully
  # Note: Writing the command as it was formed as the input to run.command():
  #   other flags may potentially change if this file is eventually used to resume the script
  if shared.get_scratch_dir():
    with shared.lock:
      with open(os.path.join(shared.get_scratch_dir(), 'log.txt'), 'a') as outfile:
        outfile.write(cmdstring + '\n')

  return CommandReturn(return_stdout, return_stderr)


def function(fn_to_execute, *args, **kwargs): #pylint: disable=unused-variable
  from mrtrix3 import app #pylint: disable=import-outside-toplevel
  if not fn_to_execute:
    raise TypeError('Invalid input to run.function()')

  show = kwargs.pop('show', True)
  fnstring = fn_to_execute.__module__ + '.' + fn_to_execute.__name__ + \
             '(' + ', '.join(['\'' + str(a) + '\'' if isinstance(a, STRING_TYPES) else str(a) for a in args]) + \
             (', ' if (args and kwargs) else '') + \
             ', '.join([key+'='+str(value) for key, value in kwargs.items()]) + ')'

  if shared.get_continue():
    if shared.trigger_continue(args) or shared.trigger_continue(kwargs.values()):
      app.debug('Detected last file in function \'' + fnstring + '\'; this is the last run.command() / run.function() call that will be skipped')
    if shared.verbosity:
      sys.stderr.write(ANSI.execute + 'Skipping function:' + ANSI.clear + ' ' + fnstring + '\n')
      sys.stderr.flush()
    return None

  if (shared.verbosity and show) or shared.verbosity > 1:
    sys.stderr.write(ANSI.execute + 'Function:' + ANSI.clear + ' ' + fnstring + '\n')
    sys.stderr.flush()

  # Now we need to actually execute the requested function
  try:
    if kwargs:
      result = fn_to_execute(*args, **kwargs)
    else:
      result = fn_to_execute(*args)
  except Exception as exception: # pylint: disable=broad-except
    raise MRtrixFnError(fnstring, str(exception))

  # Only now do we append to the script log, since the function has completed successfully
  if shared.get_scratch_dir():
    with shared.lock:
      with open(os.path.join(shared.get_scratch_dir(), 'log.txt'), 'a') as outfile:
        outfile.write(fnstring + '\n')

  return result


# When running on Windows, add the necessary '.exe' so that hopefully the correct
# command is found by subprocess
def exe_name(item):
  from mrtrix3 import app, utils #pylint: disable=import-outside-toplevel
  if not utils.is_windows():
    path = item
  elif item.endswith('.exe'):
    path = item
  elif os.path.isfile(os.path.join(BIN_PATH, item)):
    path = item
  elif os.path.isfile(os.path.join(BIN_PATH, item + '.exe')):
    path = item + '.exe'
  elif find_executable(item) is not None:
    path = item
  elif find_executable(item + '.exe') is not None:
    path = item + '.exe'
  # If it can't be found, return the item as-is; find_executable() fails to identify Python scripts
  else:
    path = item
  app.debug(item + ' -> ' + path)
  return path


# Make sure we're not accidentally running an MRtrix executable on the system that
# belongs to a different version of MRtrix3 to the script library currently being used,
# or a non-MRtrix3 command with the same name as an MRtrix3 command
# (e.g. C:\Windows\system32\mrinfo.exe; On Windows, subprocess uses CreateProcess(),
# which checks system32\ before PATH)
def version_match(item):
from mrtrix3 import app #pylint: disable=import-outside-toplevel
if not item in EXE_LIST:
app.debug('Command ' + item + ' not found in MRtrix3 bin/ directory')
return item
exe_path_manual = os.path.join(BIN_PATH, exe_name(item))
if os.path.isfile(exe_path_manual):
app.debug('Version-matched executable for ' + item + ': ' + exe_path_manual)
return exe_path_manual
exe_path_sys = find_executable(exe_name(item))
if exe_path_sys and os.path.isfile(exe_path_sys):
app.debug('Using non-version-matched executable for ' + item + ': ' + exe_path_sys)
return exe_path_sys
raise MRtrixError('Unable to find executable for MRtrix3 command ' + item)
# If the target executable is not a binary, but is actually a script, use the
# shebang at the start of the file to alter the subprocess call
def _shebang(item):
from mrtrix3 import app, utils #pylint: disable=import-outside-toplevel
# If a complete path has been provided rather than just a file name, don't perform any additional file search
if os.sep in item:
path = item
else:
path = version_match(item)
if path == item:
path = find_executable(exe_name(item))
if not path:
app.debug('File \"' + item + '\": Could not find file to query')
return []
# Read the first 1024 bytes of the file
with open(path, 'rb') as file_in:
data = file_in.read(1024)
# Try to find the shebang line
for line in data.splitlines():
    # Are there any non-text characters? If so, it's a binary file, so no need to look for a shebang
try:
line = str(line.decode('utf-8'))
except:
app.debug('File \"' + item + '\": Not a text file')
return []
line = line.strip()
if len(line) > 2 and line[0:2] == '#!':
# Need to strip first in case there's a gap between the shebang symbol and the interpreter path
shebang = line[2:].strip().split(' ')
# On Windows, /usr/bin/env can't be easily found, and any direct interpreter path will have a similar issue.
# Instead, manually find the right interpreter to call using distutils
# Also if script is written in Python, try to execute it using the same interpreter as that currently running
if os.path.basename(shebang[0]) == 'env':
if len(shebang) < 2:
app.warn('Invalid shebang in script file \"' + item + '\" (missing interpreter after \"env\")')
return []
if shebang[1] == 'python':
if not sys.executable:
app.warn('Unable to self-identify Python interpreter; file \"' + item + '\" not guaranteed to execute on same version')
return []
shebang = [ sys.executable ] + shebang[2:]
app.debug('File \"' + item + '\": Using current Python interpreter')
elif utils.is_windows():
shebang = [ os.path.abspath(find_executable(exe_name(shebang[1]))) ] + shebang[2:]
elif utils.is_windows():
shebang = [ os.path.abspath(find_executable(exe_name(os.path.basename(shebang[0])))) ] + shebang[1:]
app.debug('File \"' + item + '\": string \"' + line + '\": ' + str(shebang))
return shebang
app.debug('File \"' + item + '\": No shebang found')
return []
|
Note: chmod 666 is insecure; it would be better to add the user to the appropriate group, but I don't want to risk instability by doing something I don't fully understand. Any Linux experts reading this?
Accept the licence and let the Java applet connect to the phone and download the latest ROM for you.
- I was forced to remove the battery because my phone was temporarily blocked; after the restart, the phone completed the installation and finished.
|
import numpy as np
import matplotlib.pyplot as plt
import gym
import time
import copy
from keras.models import Sequential, Model
# 'Merge' dropped from this import: it was removed in Keras 2 and is unused below
from keras.layers import Dense, Activation, Flatten, Lambda, Input, Reshape, concatenate
from keras.optimizers import Adam, RMSprop
from keras.callbacks import History
from keras import backend as K
import tensorflow as tf
from gym import Env, Space, spaces
from gym.utils import seeding
from rl.agents.dqn import DQNAgent
from rl.policy import BoltzmannQPolicy, EpsGreedyQPolicy
from rl.memory import SequentialMemory, EpisodeParameterMemory
from rl.agents.cem import CEMAgent
from rl.agents import SARSAAgent
from rl.callbacks import TrainEpisodeLogger, CallbackList
# env = gym.make('MountainCar-v0')
env = gym.make('CartPole-v1')
env.seed()
nb_actions = env.action_space.n
x = Input((1,) + env.observation_space.shape)
y = Flatten()(x)
y = Dense(16)(y)
y = Activation('relu')(y)
y = Dense(16)(y)
y = Activation('relu')(y)
y = Dense(16)(y)
y = Activation('relu')(y)
y = Dense(nb_actions)(y)
y = Activation('linear')(y)
model = Model(x, y)
memory = SequentialMemory(limit=10000, window_length=1)
# policy = BoltzmannQPolicy()
policy = EpsGreedyQPolicy()
dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory, nb_steps_warmup=1000, gamma=.9,
enable_dueling_network=False, dueling_type='avg', target_model_update=1e-2, policy=policy)
# dqn = DQNAgent(model=model, nb_actions=nb_actions, memory=memory, nb_steps_warmup=10,
# enable_dueling_network=True, dueling_type='avg', target_model_update=1e-2, policy=policy)
dqn.compile(Adam(lr=.001, decay=.001), metrics=['mae'])
rewards = []
# verbose=2 already installs an episode logger, and fit() returns a History,
# so no extra callbacks are needed here
hist = dqn.fit(env, nb_steps=10000, visualize=False, verbose=2, callbacks=None)
rewards.extend(hist.history.get('episode_reward'))
plt.plot(rewards)
plt.show()
dqn.test(env, nb_episodes=5, visualize=True)
state = env.reset()
action = env.action_space.sample()
print(action)
state_list= []
for i in range(300):
state_list.append(state)
    # CartPole-v1 only has actions 0 and 1, so env.step(2) would raise;
    # act greedily from the trained Q-network instead
    action = np.argmax(dqn.model.predict(np.expand_dims(np.expand_dims(state, 0), 0))[0])
    state, reward, done, _ = env.step(action)
env.render()
env.render(close=True)
state_arr = np.array(state_list)
plt.plot(state_arr)
plt.show()
|
Browse trusted local Ground Work & Demolition in Lymm on TrustATrader, all vetted, with photos of completed work, and reviews from previous customers.
Ground Work & Demolition in Stafford, ST16 3HL.
Ground Work & Demolition in Leeds, LS1 3AJ.
Ground Work & Demolition in Belper, DE56 2JQ.
Ground Work & Demolition in Chester. Covering Chester, Deeside and surrounding areas.
Ground Work & Demolition in Stockport. Covering Cheshire, Stockport and South Manchester.
|
# encoding: utf-8
import datetime
from south.db import db
from south.v2 import SchemaMigration
from django.db import models
class Migration(SchemaMigration):
def forwards(self, orm):
# Adding field 'CronLog.exception_message'
db.add_column('cronjobs_cronlog', 'exception_message', self.gf('django.db.models.fields.TextField')(default=''), keep_default=False)
# Adding field 'CronLog.duration'
db.add_column('cronjobs_cronlog', 'duration', self.gf('django.db.models.fields.DecimalField')(default=0.0, max_digits=10, decimal_places=3), keep_default=False)
def backwards(self, orm):
# Deleting field 'CronLog.exception_message'
db.delete_column('cronjobs_cronlog', 'exception_message')
# Deleting field 'CronLog.duration'
db.delete_column('cronjobs_cronlog', 'duration')
models = {
'cronjobs.cron': {
'Meta': {'object_name': 'Cron'},
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'next_run': ('django.db.models.fields.DateTimeField', [], {'default': 'datetime.datetime(2010, 6, 7, 15, 42, 29, 846310)'}),
'type': ('django.db.models.fields.related.ForeignKey', [], {'to': "orm['cronjobs.CronType']"})
},
'cronjobs.cronlog': {
'Meta': {'object_name': 'CronLog'},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'duration': ('django.db.models.fields.DecimalField', [], {'default': '0.0', 'max_digits': '10', 'decimal_places': '3'}),
'exception_message': ('django.db.models.fields.TextField', [], {'default': "''"}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'success': ('django.db.models.fields.BooleanField', [], {'default': 'False', 'blank': 'True'}),
'timestamp': ('django.db.models.fields.DateTimeField', [], {'auto_now_add': 'True', 'blank': 'True'})
},
'cronjobs.crontype': {
'Meta': {'object_name': 'CronType'},
'app_label': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'cache_timeout': ('django.db.models.fields.PositiveIntegerField', [], {'default': '0'}),
'id': ('django.db.models.fields.AutoField', [], {'primary_key': 'True'}),
'name': ('django.db.models.fields.CharField', [], {'max_length': '255'}),
'run_every': ('django.db.models.fields.IntegerField', [], {'default': '86400'})
}
}
complete_apps = ['cronjobs']
|
A rectangle is divided into n·m squares of equal size, like a matrix with n rows and m columns. By straight cuts from border to border, along the sides of the squares, the rectangle is to be cut up completely, leaving n·m separate squares. The following picture shows, for n = 4 and m = 6, the start and the end of the procedure, and in the middle a possible situation after several cuts.
B: Before each cut, pieces may be stacked or laid alongside each other, allowing cuts through more than one piece.
This cut (red) is legal in variant B, but not in variant A.
If you hit upon the right idea for variant A, you will easily find the solution without any computation. But you can also find the solution by testing the possible ways of cutting up small rectangles; the rule for determining the minimum number of cuts will then become evident.
Concerning variant B, only this much shall be said here: the intuitive idea for fastest cutting is the best one.
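The intuitive stacking strategy for variant B — pile the pieces up and halve along each dimension — is easy to count. The sketch below only tallies the cuts used by that strategy; it does not prove optimality, which the text leaves to the reader:

```python
import math

def cuts_by_halving(n, m):
    """Cuts used by the stack-and-halve strategy for an n x m rectangle.

    Each cut through the whole stack can at best double the number of
    strips in one dimension, so a dimension of size s needs
    ceil(log2(s)) cuts.
    """
    cuts = 0
    for size in (n, m):
        if size > 1:
            cuts += math.ceil(math.log2(size))
    return cuts

# The 4 x 6 example from the text: 2 cuts for the rows, 3 for the columns
print(cuts_by_halving(4, 6))  # -> 5
```

Trying small rectangles with this function is a quick way to compare variant B against the cut counts you find by hand for variant A.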
|
class NumMatrix(object):
def __init__(self, matrix):
"""
initialize your data structure here.
:type matrix: List[List[int]]
"""
        if len(matrix) == 0:
            # keep an empty table so sumRegion() doesn't hit an
            # undefined attribute when given an empty matrix
            self.m = []
            return
        # (n+1) x (m+1) prefix-sum table: self.m[i+1][j+1] is the sum of matrix[0..i][0..j]
        self.m = [[0 for x in xrange(len(matrix[0])+1)] for x in xrange(len(matrix)+1)]
for i in xrange(len(matrix)):
for j in xrange(len(matrix[0])):
self.m[i+1][j+1] = self.m[i][j+1] + self.m[i+1][j] - self.m[i][j] + matrix[i][j]
def sumRegion(self, row1, col1, row2, col2):
"""
sum of elements matrix[(row1,col1)..(row2,col2)], inclusive.
:type row1: int
:type col1: int
:type row2: int
:type col2: int
:rtype: int
"""
return self.m[row2+1][col2+1] - self.m[row2+1][col1] - self.m[row1][col2+1] + self.m[row1][col1]
if __name__ == '__main__':
matrix = [
[3, 0, 1, 4, 2],
[5, 6, 3, 2, 1],
[1, 2, 0, 1, 5],
[4, 1, 0, 1, 7],
[1, 0, 3, 0, 5]
]
numMatrix = NumMatrix(matrix)
print numMatrix.sumRegion(1, 1, 2, 2)
print numMatrix.sumRegion(2, 1, 4, 3)
print numMatrix.sumRegion(1, 2, 2, 4)
|
Just like new windows, a beautiful new entranceway can have a significant effect on your Upminster home, transforming its appearance and increasing its value – as well as enhancing its energy efficiency and security. Your front door is the first feature of your home that guests and visitors will see, so it’s always a good idea to make an outstanding first impression. For this reason there are a number of decisions to be made when choosing a replacement door.
There’s no doubt about it – colour matters. It’s as much part of your windows and doors as the glass and the frame. Colour can change everything: it can be bold and forceful, or subtle and elegant. It can shout or whisper. It can blend in or stand out. Colour’s an expression of you and what you want your Upminster home to be.
Our range includes finishes that are timeless or traditional, beautiful woodgrain effects, accent colours and on-trend shades. Pick something to enhance your home’s building materials or to suit the Upminster scenery. Choose a different finish for indoors and outdoors. Communicate with colour and show off the real beauty of your Upminster home.
It’s important that locks, handles and hinges – even frames and glazing – do their job and protect you and your family. All Swish windows and doors conform to PAS 24: 2012, which covers enhanced security performance requirements for doors and windows in Upminster and across the UK. Look out for features like multi-point locking and egress hinges, which allow a window to open to 90° for emergency escape.
You may want to consider products that have Secured By Design accreditation – this is an Association of Chief Police Officers’ initiative that aims to design out opportunities for crime in Upminster and the rest of the UK. These products have extra security features built in and are available from some of our Swish Authorised Installers.
All of our window and door profiles are covered by a ten year guarantee against manufacturing defects. Your Upminster installer will offer a guarantee on the installation. Make sure that this guarantee is “insurance-backed” and offered by a DTI approved insurer. This means that if the installer goes out of business – for any reason – you are still covered.
|
# Define the initial variable values
min_max = [0, 1000]
vvod = (min_max[0] + min_max[1]) / 2
v_text = {1: 'lower limit', 2: 'upper limit', 3: 'number to guess'}
# Input-validation function
def prov_vvoda(vd_text):
    global vvod
    while True:
        sp = input('Enter the ' + v_text[vd_text] + ' (from ' + str(min_max[0]) + ' to ' + str(min_max[1]) + ')\nor press "Enter" to exit: ')
        if sp == '':
            raise SystemExit(1)
        elif not sp.isdigit():
            print('But that is not a positive whole number!')
        else:
            vvod = int(sp)
            if min_max[0] <= vvod <= min_max[1]:
                break
            else:
                print('The number does not satisfy the condition (from ', min_max[0], ' to ', min_max[1], ')', sep = '')
# Read the min_max limits, validating the input
prov_vvoda(1)
min_max[0] = vvod
prov_vvoda(2)
min_max[1] = vvod
# Generate a random number within the min_max range
import random
r_vopros = random.randint(min_max[0], min_max[1])
# Ask the player to guess, validating the input
print('The computer has picked a number. Try to guess it!')
i_step = 1
while True:
    print('Attempt #', i_step, sep='')
    prov_vvoda(3)
    if r_vopros == vvod:
        print('Excellent! You guessed the number "', vvod, '" on attempt ', i_step, '!', sep = '')
        input('Press "Enter" to exit.')
        raise SystemExit(1)
    elif r_vopros > vvod:
        if vvod > min_max[0]: min_max[0] = vvod
    else:
        if vvod < min_max[1]: min_max[1] = vvod
    i_step += 1
input('Press "Enter" to exit.')
|
When I read the title Santa’s First Vegan Christmas, I thought that this was going to be a kids’ book about leaving Santa plant milk and vegan cookies under the tree. Boy was I wrong!
This cute, rhyming kid’s story actually tackles the very serious theme of animal labor. Specifically, it deals with the fact that Santa is using reindeer to pull his sleigh.
Luckily, a sweet little reindeer named Dana sets Santa straight. Realizing the error of his ways, Santa decides to run his sleigh on magic power. Once he gets into the vegan spirit, he even decides to liberate all the caged pets and animals that Christmas, too.
I’ve got to admit that I was worried this theme would be too intense for my 5-year-old. Would it ruin Christmas for her if she started thinking of Santa’s reindeer as indentured servants instead of jolly volunteers?
I’m happy to say that the book did not ruin Christmas for my daughter, and we did have a nice discussion about whether it is fair to use animals like horses and reindeer for pulling sleighs and carriages.
What will she think the next time we hear the song “Rudolph the Red-Nosed Reindeer” playing? I’m not sure. But this book definitely highlights how our vegan values can often conflict with holiday traditions. The book is a fun, lighthearted way to introduce a really serious topic to your kids.
Aside from the intense underlying message, Santa’s First Vegan Christmas is a cute holiday story with rhymes that flow very well and quirky illustrations. I’ll definitely be reading it with my kid again when the holidays come around.
I love the sound of this book! There are so few vegan-friendly Christmas books. How exciting that we now have one that’s explicitly pro-vegan and pro-animal rights!
|
# coding: utf-8
from .currencies import CURRENCIES
class Currency(object):
CURRENCIES = dict([(c[0], c[1:]) for c in CURRENCIES])
def __init__(self, iso_code):
        if iso_code not in self.CURRENCIES:
raise TypeError('unknown currency (%s)' % iso_code)
self.iso_code = iso_code
self.iso_num, self.decimal_places, self.rounding, self.name, self.symbol = self.CURRENCIES[iso_code]
def __str__(self):
from django.utils.encoding import smart_str
return smart_str(unicode(self))
def __unicode__(self):
return self.iso_code
def __eq__(self, other):
if not isinstance(other, Currency):
            # allow direct comparison to ISO codes
if other in self.CURRENCIES:
return self.iso_code == other
return False
return self.iso_code == other.iso_code
def __ne__(self, other):
return not self == other
# django_ajax hook
def ajax_data(self):
return {
'iso_code': self.iso_code,
'name': self.name,
'symbol': self.symbol,
}
|
TRIO Sports has custom sailing gear, such as rash guards and pinnies/buoyancy covers, for club sailing, dinghy sailing, yacht sailing, scholastic sailing and more.
Check out our Custom Sailing Clothing Apparel catalog!
Rashguards, Buoyancy Covers/Pinnies, Performance Technical Shirts, Neck Gaiters and more! Everything you need for your team.
|
# -*- Mode: Python -*-
# [reworking of the version in Python-1.5.1/Demo/scripts/pi.py]
# Print digits of pi forever.
#
# The algorithm, using Python's 'long' integers ("bignums"), works
# with continued fractions, and was conceived by Lambert Meertens.
#
# See also the ABC Programmer's Handbook, by Geurts, Meertens & Pemberton,
# published by Prentice-Hall (UK) Ltd., 1990.
import string
# string exceptions were removed from Python long ago; use a class instead
class StopException(Exception):
    pass
def go (file):
try:
        k, a, b, a1, b1 = 2, 4, 1, 12, 4  # plain ints: Python integers are arbitrary-precision
while 1:
# Next approximation
            p, q, k = k*k, 2*k+1, k+1
a, b, a1, b1 = a1, b1, p*a+q*a1, p*b+q*b1
# Print common digits
            d, d1 = a//b, a1//b1  # floor division keeps these integral
while d == d1:
if file.write (str(int(d))):
raise StopException
                a, a1 = 10*(a%b), 10*(a1%b1)
                d, d1 = a//b, a1//b1
except StopException:
return
class line_writer:
"partition the endless line into 80-character ones"
def __init__ (self, file, digit_limit=10000):
self.file = file
self.buffer = ''
self.count = 0
self.digit_limit = digit_limit
def write (self, data):
self.buffer = self.buffer + data
if len(self.buffer) > 80:
line, self.buffer = self.buffer[:80], self.buffer[80:]
self.file.write (line+'\r\n')
self.count = self.count + 80
if self.count > self.digit_limit:
return 1
else:
return 0
def main (env, stdin, stdout):
    parts = env['REQUEST_URI'].split ('/')
    if len(parts) >= 3:
        ndigits = int (parts[2])
else:
ndigits = 5000
stdout.write ('Content-Type: text/plain\r\n\r\n')
go (line_writer (stdout, ndigits))
|
[Atmospheric pollution and bronchial asthma].
[Respiratory morbidity and atmospheric pollution].
[An ACTH-secreting pituitary macroadenoma: inhibitory effect of cyproheptadine and somatostatin].
[Leukocyte kinetics during hemodialysis: increase of leukocyte aggregation and chemotactic activity of the serum as etiologic factors in intradialytic leukopenia].
[Hemorrhagic cerebral infarct. Clinical characteristics and cranial computerized tomography].
[Results of the administration of sodium phosphate cellulose as medical treatment of primary hyperparathyroidism].
[Arterial hypertension in the hospital. The experience of the Hypertension Section of the Bellvitge-Princeps d'Espanya Hospital].
[Usefulness of bronchial aspiration in the diagnosis of pulmonary tuberculosis].
[Carbon tetrachloride poisoning: a new case].
|
__author__ = 'kevin'
from PySide.QtCore import QObject
from _plugin1__iplugin_test__dev_ import easy_import_ as ei
if ei.initialized:
from iplugin import IPlugin
from pluginmanager import PluginManager
class MyPlugin1(IPlugin):
def __init__(self, manager):
super(MyPlugin1, self).__init__(manager)
self.initializeCalled = False
def initialize(self, arguments):
        self.initializeCalled = True  # record that initialize() ran; checked in extensionsInitialized()
obj = QObject(self)
obj.setObjectName("MyPlugin1")
self.addAutoReleaseObject(obj)
found2 = False
found3 = False
for otherPluginObj in PluginManager.getInstance().allObjects():
if otherPluginObj.objectName() == "MyPlugin2":
found2 = True
elif otherPluginObj.objectName() == "MyPlugin3":
found3 = True
if found2 and found3:
return True, "No error"
errorString = "object(s) missing from plugin(s):"
if not found2:
errorString += "plugin2"
if not found3:
errorString += "plugin3"
return False, errorString
def extensionsInitialized(self):
if not self.initializeCalled:
return
obj = QObject(self)
obj.setObjectName("MyPlugin1_running")
self.addAutoReleaseObject(obj)
|
Getting younger pupils used to handling scissors and cutting lines or patterns is a useful activity for developing fine motor skills. Pupils with special educational needs may also need some extra practice with this activity. This seasonal resource requires pupils to cut out the autumn-themed images and stick them into a copy.
|
import copy
from collections import OrderedDict
# Django Libraries
from django.conf import settings
from django.db import connections
# CloudScape Libraries
from cloudscape.common import logger
from cloudscape.common import config
from cloudscape.engine.api.app.host.models import DBHostDetails
# Maximum Rows per Table
MAX_ROWS = 360
class DBHostStats:
"""
Main database model for storing polling statistics for managed hosts. Each host has its own
table in the host statistics database.
"""
def __init__(self, uuid=None):
self.uuid = uuid
self.db = settings.DATABASES['host_stats']
self.dbh = connections['host_stats'].cursor()
# Configuration and logger
self.conf = config.parse()
self.log = logger.create(__name__, self.conf.server.log)
# Parameters validation flag
self.pv = True
# Create an ordered dictionary for the column names
self.columns = OrderedDict([
('uptime', 'VARCHAR(48)'),
('cpu_use', 'TEXT'),
('memory_use', 'TEXT'),
('memory_top', 'TEXT'),
('disk_use', 'TEXT'),
('disk_io', 'TEXT'),
('network_io', 'TEXT')])
# Make sure a UUID is specified
if not self.uuid:
self.pv = False
# Make sure the UUID is mapped to an existing host
if not DBHostDetails.objects.filter(uuid=self.uuid).count():
self.pv = False
def _table_init(self):
"""
Initialize the host statistics table.
"""
# Construct the columns string
col_str = ''
for name, data_type in self.columns.iteritems():
col_str += ',%s %s' % (name, data_type)
# Construct the table query
timestamp = 'created TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP'
init_query = 'CREATE TABLE IF NOT EXISTS `%s`.`%s`(%s%s)' % (self.db['NAME'], self.uuid, timestamp, col_str)
# Create the table
try:
self.dbh.execute(init_query)
self.log.info('Initialized host \'%s\' statistics table' % self.uuid)
return True
except Warning:
return True
except Exception as e:
self.log.error('Failed to initialize host \'%s\' statistics table: %s' % (self.uuid, e))
return False
def _construct_poll_query(self, params):
"""
Construct the polling data column query.
"""
col_names = ''
col_values = ''
for key, data_type in self.columns.iteritems():
col_names += '%s,' % key
col_values += '\'%s\',' % params[key]
col_names = col_names[:-1]
col_values = col_values[:-1]
# Construct and return the poll query
return 'INSERT INTO `%s`.`%s`(%s) VALUES(%s)' % (self.db['NAME'], self.uuid, col_names, col_values)
def _fetch_all(self, cursor):
"""
Retrieve all rows from a raw SQL query.
"""
desc = cursor.description
return [
dict(zip([col[0] for col in desc], row))
for row in cursor.fetchall()
]
def _count(self):
"""
Count the number of statistic rows being returned.
"""
self.dbh.execute('SELECT COUNT(*) FROM `%s`' % self.uuid)
result = self.dbh.fetchone()
return result[0]
def get(self, range=None, latest=None):
"""
Retrieve host statistics.
"""
# If getting the latest N rows
if latest:
query = 'SELECT * FROM `%s`.`%s` ORDER BY created DESC LIMIT %s' % (self.db['NAME'], self.uuid, latest)
self.dbh.execute(query)
rows = self._fetch_all(self.dbh)
# If returning a single row
if latest == 1:
return rows[0]
else:
return rows
# If provided a start/end range
        if 'start' not in range or 'end' not in range:
return False
        # Default range - the most recent 30 entries
if not range['start'] and not range['end']:
query = 'SELECT * FROM `%s`.`%s` ORDER BY created DESC LIMIT 30' % (self.db['NAME'], self.uuid)
# Select all up until end
elif not range['start'] and range['end']:
query = 'SELECT * FROM `%s`.`%s` WHERE created < \'%s\'' % (self.db['NAME'], self.uuid, range['end'])
# Select from start date to end
elif range['start'] and not range['end']:
query = 'SELECT * FROM `%s`.`%s` WHERE created > \'%s\'' % (self.db['NAME'], self.uuid, range['start'])
# Select between two date ranges
else:
            query = 'SELECT * FROM `%s`.`%s` WHERE (created BETWEEN \'%s\' AND \'%s\')' % (self.db['NAME'], self.uuid, range['start'], range['end'])
# Select host statistics
try:
# Get the unsorted results
self.dbh.execute(query)
rows = self._fetch_all(self.dbh)
            # Convert to a dictionary keyed by the formatted timestamp
            results_dict = {}
            for row in rows:
                key = row['created'].strftime('%Y-%m-%d %H:%M:%S')
                stats = copy.copy(row)
                del stats['created']
                results_dict[key] = stats
# Order the dictionary by date
results_sort = OrderedDict()
for key, value in sorted(results_dict.iteritems(), key=lambda t: t[0]):
results_sort[key] = value
            # Return the ordered results
return results_sort
except Exception as e:
self.log.error('Failed to retrieve host statistics for \'%s\': %s' % (self.uuid, e))
return False
def create(self, params):
"""
Create a new host statistics row.
"""
# If any parameters are invalid
if self.pv == False:
self.log.error('Host UUID \'%s\' is invalid or is not a managed host' % self.uuid)
return False
# Require a row parameters dictionary
if not params or not isinstance(params, dict):
self.log.error('Missing required dictionary of column names/values')
return False
# Make sure all required
for key, data_type in self.columns.iteritems():
if key not in params:
self.log.error('Missing required column key \'%s\'' % key)
return False
# Make sure the host table exists
table_status = self._table_init()
if table_status != True:
return False
# Construct the polling query
poll_query = self._construct_poll_query(params)
# Create the statistics row
try:
self.dbh.execute(poll_query)
return True
except Exception as e:
            self.log.error('Failed to create statistics row for host \'%s\': %s' % (self.uuid, e))
return False
def delete(self):
"""
Delete a host statistics table.
"""
# If any parameters are invalid
if self.pv == False:
return False
# Construct the drop table syntax
drop_query = 'DROP TABLE IF EXISTS `%s`.`%s`' % (self.db['NAME'], self.uuid)
# Drop the host statistics table
try:
self.dbh.execute(drop_query)
return True
except Exception as e:
self.log.error('Failed to delete host \'%s\' statistics table: %s' % (self.uuid, e))
return False
|
After his convincing victory over Conor McGregor at UFC 229, and the brawl that followed it, Khabib Nurmagomedov and his father Abdulmanap Nurmagomedov met Russian President Vladimir Putin.
Putin, a 6th degree black belt in Judo and master in Sambo and Karate, is a big supporter of Russian MMA.
“I will ask your father not to punish you too strictly, because you achieved the main task, worthily and convincingly,” said Putin to Khabib.
said Nurmagomedov at the post-fight press conference.
|
#
import math, re
#
def get_random_airport (openTrepLibrary, travelPB2):
# OpenTREP
result = openTrepLibrary.getRandom()
# Protobuf
place = travelPB2.Place()
place.ParseFromString (result)
return place
#
def get_lat_lon (place):
return place.coord.latitude, place.coord.longitude
#
def get_lon_lat (place):
return place.coord.longitude, place.coord.latitude
#
def get_country_code (place):
return place.country_code.code
#
def get_country_name (place):
return place.country_name
#
def get_city_code (place):
return place.city_code.code
#
def get_city_names (place):
return place.city_name_utf, place.city_name_ascii
#
def get_airport_names (place, nice=False):
ugly_name_utf = place.name_utf
    ugly_name_ascii = place.name_ascii  # fixed: was place.city_name_ascii, an apparent copy-paste slip from get_city_names()
if not nice:
return ugly_name_utf, ugly_name_ascii
# "Ugly" names typically contain the country code (2 uppercase letters)
# followed by a truncated repetition of the POR name. For instance:
# - Cagnes Sur Mer FR Cagnes Sur M
# - Antibes FR Antibes
# We just truncate the name and keep the part before the country code.
nice_name_utf = re.sub (r"(.*)([ ])([A-Z]{2})([ ])(.*)", "\\1", ugly_name_utf)
nice_name_ascii = re.sub (r"(.*)([ ])([A-Z]{2})([ ])(.*)", "\\1",
ugly_name_ascii)
return nice_name_utf, nice_name_ascii
#
def get_continent_code (place):
return place.continent_code.code
#
def get_continent_name (place):
return place.continent_name
#
def get_country_and_continent (place):
return place.country_code.code, place.continent_code.code
#
def great_circle_distance (lat1, lon1, lat2, lon2, degrees=True):
if degrees:
lat1 = lat1/180.0*math.pi
lon1 = lon1/180.0*math.pi
lat2 = lat2/180.0*math.pi
lon2 = lon2/180.0*math.pi
diameter = 12742.0
lat_diff = (lat2-lat1) / 2.0
lat_diff_sin = math.sin (lat_diff)
lon_diff = (lon2-lon1) / 2.0
lon_diff_sin = math.sin (lon_diff)
lat_cos = math.cos (lat1) * math.cos (lat2)
proj_dist = lat_diff_sin**2.0 + lat_cos * lon_diff_sin**2.0
gcd = diameter * math.asin (math.sqrt (proj_dist))
return gcd
#
def get_distance_km (place1, place2):
lat1, lon1 = get_lat_lon (place1)
lat2, lon2 = get_lat_lon (place2)
dist = great_circle_distance (lat1, lon1, lat2, lon2)
return dist
#
def get_local_local_flight_duration_hr (place1, place2):
    lat1, lon1 = get_lat_lon (place1)
    lat2, lon2 = get_lat_lon (place2)
    dist = great_circle_distance (lat1, lon1, lat2, lon2)
    travel_hr = 0.5 + dist/800.0
    # Rough local-time shift: 15 degrees of longitude per hour of offset
    time_diff_hr = (lon2 - lon1) / 15.0
    return travel_hr + time_diff_hr
|
With our curriculum, we seek to create teachers and children who have high expectations; striving for excellence across all areas of the curriculum. As the National Curriculum moves towards Mastery and children are expected to increasingly apply their skills throughout all aspects of their learning, we continue to push for the delivery of cross-curricular teaching in order to meet the needs of our children in a variety of ways. We want learning to take place in a safe, purposeful and engaging environment but for our children to be so enthralled with their learning that it continues outside school too.
At Maltby Lilly Hall, we enhance the learning opportunities available to our students through well-chosen and structured learning experiences both in and out of school. This can be seen in anything from the day-to-day lessons that our teachers deliver, to visitors brought in to school, the continued provision of creative homework and the trips/residentials that we plan for our pupils. All of this ultimately serves one purpose: to engage our pupils in a rich curriculum.
All maintained primary schools in England are required to follow the National Curriculum set by the Department for Education. The National Curriculum sets out both programmes of study (the broad areas pupils should learn) and descriptions of the attainment levels they should reach.
From September 2014 the primary school curriculum underwent a complete overhaul.
Why the change? The main goal is to raise standards. Although the new curriculum is intended to be more challenging, its content is actually lighter than that of the previous curriculum, focusing on essential core subject knowledge and skills such as essay writing and computer programming.
|
import re
import json
import gevent
import requests
from gevent.queue import Empty
# my module
from . import stand_task
from modules import database
class bilibili_spider(stand_task.task):
    'a spider especially designed for bilibili bangumi'
def __init__(self, aim=None):
        'aim stands for the bangumi you want to watch'
super().__init__()
self.id = 'laxect.bilibili_spider'
self.inbox = self.id
self.version = 1
self.mode = 'from_database'
self.aims = []
def _url(self, aim):
url = f'http://bangumi.bilibili.com/jsonp/seasoninfo/{aim}\
.ver?callback=seasonListCallback'
return url
def _handle(self, text, aim):
'the fun to handle the text spider return'
        dica = json.loads(re.findall(r'\w*\((.*)\);', text)[0])
title = dica['result']['bangumi_title']
eps = dica['result']['episodes']
res = (
title,
eps[0]['index'],
eps[0]['index_title'],
eps[0]['webplay_url'])
fres = '%s 更新了第%s集 %s\n%s' % res # format string
with database.database(self.id) as db:
if db.check_up_to_date(aim, str(res)):
return fres
return None
def _aim_run(self, aim, res):
try:
ts = self._handle(requests.get(self._url(aim), timeout=5).text, aim)
        except requests.exceptions.RequestException:
            return
if ts:
res.append(ts)
def _run(self, targets):
if self.mode == 'from_inbox':
aims = self.aims
else:
aims = targets
res = []
pool = []
for aim in aims:
if aim:
pool.append(gevent.spawn(self._aim_run, aim, res))
gevent.joinall(pool)
if self.debug:
msg = f'the res of run is:\n{str(res)}'
self.debug_information_format(msg)
return res
def _inbox_handle(self, inbox):
aims = []
try:
while True:
item = inbox.get(block=False)
if item:
aims = item['msg']
except Empty:
pass
if self.debug:
msg = f'the argv recv from inbox is:\n{str(aims)}'
self.debug_information_format(msg)
if aims:
self.aims = aims
self.mode = 'from_inbox'
def mod_init(aim):
return bilibili_spider(aim=aim)
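_handle() unwraps a JSONP response before parsing it: the regex strips the seasonListCallback(...) wrapper and json.loads takes the rest. A small sketch of just that unwrapping step, with a made-up payload (the real API returns a much larger result object):

```python
import json
import re

# A seasonListCallback-style JSONP response (payload here is invented):
text = 'seasonListCallback({"result": {"bangumi_title": "Demo"}});'

# Strip the callback wrapper to recover the JSON body, as _handle() does.
body = re.findall(r'\w*\((.*)\);', text)[0]
data = json.loads(body)
```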
|
BRIAN CAMPBELL ADAIR worked as a SOLICITOR at SCOTS LAW TRUST LIMITED and THE SCOTTISH COUNCIL OF LAW REPORTING.
BRIAN CAMPBELL ADAIR was employed at SCOTS LAW TRUST LIMITED as Director (SOLICITOR) from 1992.10.30 to 1994.07.07.
BRIAN CAMPBELL ADAIR was employed at THE SCOTTISH COUNCIL OF LAW REPORTING as Director (SOLICITOR) from 1992.10.28 to 1993.05.28.
Score for BRIAN CAMPBELL ADAIR is 5 stars.
|
#
# Python GEDCOM Parser
#
# This is a basic parser for the GEDCOM 5.5 format. For documentation of
# this format, see
#
# http://homepages.rootsweb.com/~pmcbride/gedcom/55gctoc.htm
# Copyright (C) 2012 Daniel Zappala (daniel.zappala [at] gmail.com)
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
__all__ = ["Gedcom", "Element", "GedcomParseError"]
# Global imports
import string
class Gedcom:
"""Gedcom parser
This parser reads a GEDCOM file and parses it into a set of
elements. These elements can be accessed via a list (the order of
the list is the same as the order of the elements in the GEDCOM
file), or a dictionary (the key to the dictionary is a unique
identifier that one element can use to point to another element).
"""
def __init__(self,file):
"""Initialize a Gedcom parser. You must supply a Gedcom file."""
self.__element_list = []
self.__element_dict = {}
self.__element_top = Element(-1,"","TOP","",self.__element_dict)
self.__current_level = -1
self.__current_element = self.__element_top
self.__individuals = 0
self.__parse(file)
def element_list(self):
"""Return a list of all the elements in the Gedcom file. The
elements are in the same order as they appeared in the file.
"""
return self.__element_list
def element_dict(self):
"""Return a dictionary of elements from the Gedcom file. Only
elements identified by a pointer are listed in the dictionary.
The key for the dictionary is the pointer.
"""
return self.__element_dict
# Private methods
def __parse(self,file):
# open file
# go through the lines
f = open(file)
number = 1
for line in f.readlines():
# Skip over some junk that Rootsmagic puts in gedcom files.
if number == 1 and ord(line[0]) == 239:
line = line[3:]
self.__parse_line(number,line)
number += 1
self.__count()
def __parse_line(self,number,line):
# each line should have: Level SP (Pointer SP)? Tag (SP Value)? (SP)? NL
# parse the line
parts = string.split(line)
place = 0
l = self.__level(number,parts,place)
place += 1
p = self.__pointer(number,parts,place)
if p != '':
place += 1
t = self.__tag(number,parts,place)
place += 1
v = self.__value(number,parts,place)
# create the element
if l > self.__current_level + 1:
self.__error(number,"Structure of GEDCOM file is corrupted")
e = Element(l,p,t,v,self.element_dict())
self.__element_list.append(e)
if p != '':
self.__element_dict[p] = e
if l > self.__current_level:
self.__current_element.add_child(e)
e.add_parent(self.__current_element)
else:
            # l <= self.__current_level
while (self.__current_element.level() != l - 1):
self.__current_element = self.__current_element.parent()
self.__current_element.add_child(e)
e.add_parent(self.__current_element)
# finish up
self.__current_level = l
self.__current_element = e
def __level(self,number,parts,place):
if len(parts) <= place:
self.__error(number,"Empty line")
try:
l = int(parts[place])
except ValueError:
self.__error(number,"Line must start with an integer level")
if (l < 0):
self.__error(number,"Line must start with a positive integer")
return l
def __pointer(self,number,parts,place):
if len(parts) <= place:
self.__error(number,"Incomplete Line")
p = ''
        part = parts[place]
if part[0] == '@':
if part[len(part)-1] == '@':
p = part
# could strip the pointer to remove the @ with
# string.strip(part,'@')
# but it may be useful to identify pointers outside this class
else:
self.__error(number,"Pointer element must start and end with @")
return p
def __tag(self,number,parts,place):
if len(parts) <= place:
self.__error(number,"Incomplete line")
return parts[place]
def __value(self,number,parts,place):
if len(parts) <= place:
return ''
p = self.__pointer(number,parts,place)
if p != '':
# rest of the line should be empty
if len(parts) > place + 1:
self.__error(number,"Too many elements")
return p
else:
# rest of the line should be ours
vlist = []
while place < len(parts):
vlist.append(parts[place])
place += 1
v = string.join(vlist)
return v
def __error(self,number,text):
error = "Gedcom format error on line " + str(number) + ': ' + text
raise GedcomParseError, error
def __count(self):
# Count number of individuals
self.__individuals = 0
for e in self.__element_list:
if e.individual():
self.__individuals += 1
def __print(self):
        for e in self.element_list():
print string.join([str(e.level()),e.pointer(),e.tag(),e.value()])
class GedcomParseError(Exception):
"""Exception raised when a Gedcom parsing error occurs."""
def __init__(self, value):
self.value = value
def __str__(self):
return `self.value`
class Element:
"""Gedcom element
Each line in a Gedcom file is an element with the format
level [pointer] tag [value]
where level and tag are required, and pointer and value are
optional. Elements are arranged hierarchically according to their
level, and elements with a level of zero are at the top level.
Elements with a level greater than zero are children of their
parent.
A pointer has the format @pname@, where pname is any sequence of
characters and numbers. The pointer identifies the object being
pointed to, so that any pointer included as the value of any
element points back to the original object. For example, an
element may have a FAMS tag whose value is @F1@, meaning that this
element points to the family record in which the associated person
is a spouse. Likewise, an element with a tag of FAMC has a value
that points to a family record in which the associated person is a
child.
See a Gedcom file for examples of tags and their values.
"""
def __init__(self,level,pointer,tag,value,dict):
"""Initialize an element. You must include a level, pointer,
tag, value, and global element dictionary. Normally
initialized by the Gedcom parser, not by a user.
"""
# basic element info
self.__level = level
self.__pointer = pointer
self.__tag = tag
self.__value = value
self.__dict = dict
# structuring
self.__children = []
self.__parent = None
def level(self):
"""Return the level of this element."""
return self.__level
def pointer(self):
"""Return the pointer of this element."""
return self.__pointer
def tag(self):
"""Return the tag of this element."""
return self.__tag
def value(self):
"""Return the value of this element."""
return self.__value
def children(self):
"""Return the child elements of this element."""
return self.__children
def parent(self):
"""Return the parent element of this element."""
return self.__parent
def add_child(self,element):
"""Add a child element to this element."""
self.children().append(element)
def add_parent(self,element):
"""Add a parent element to this element."""
self.__parent = element
def individual(self):
"""Check if this element is an individual."""
return self.tag() == "INDI"
# criteria matching
def criteria_match(self,criteria):
"""Check in this element matches all of the given criteria.
The criteria is a colon-separated list, where each item in the
list has the form [name]=[value]. The following criteria are supported:
surname=[name]
Match a person with [name] in any part of the surname.
name=[name]
Match a person with [name] in any part of the given name.
birth=[year]
Match a person whose birth year is a four-digit [year].
birthrange=[year1-year2]
Match a person whose birth year is in the range of years from
[year1] to [year2], including both [year1] and [year2].
        death=[year]
             Match a person whose death year is a four-digit [year].
        deathrange=[year1-year2]
             Match a person whose death year is in the range of years from
             [year1] to [year2], including both [year1] and [year2].
        marriage=[year]
             Match a person with a marriage in the four-digit [year].
        marriagerange=[year1-year2]
             Match a person with a marriage in the range of years from
             [year1] to [year2], including both [year1] and [year2].
        """
# error checking on the criteria
try:
for crit in criteria.split(':'):
key,value = crit.split('=')
except:
return False
match = True
for crit in criteria.split(':'):
key,value = crit.split('=')
if key == "surname" and not self.surname_match(value):
match = False
elif key == "name" and not self.given_match(value):
match = False
elif key == "birth":
try:
year = int(value)
if not self.birth_year_match(year):
match = False
except:
match = False
elif key == "birthrange":
try:
year1,year2 = value.split('-')
year1 = int(year1)
year2 = int(year2)
if not self.birth_range_match(year1,year2):
match = False
except:
match = False
elif key == "death":
try:
year = int(value)
if not self.death_year_match(year):
match = False
except:
match = False
elif key == "deathrange":
try:
year1,year2 = value.split('-')
year1 = int(year1)
year2 = int(year2)
if not self.death_range_match(year1,year2):
match = False
except:
match = False
elif key == "marriage":
try:
year = int(value)
if not self.marriage_year_match(year):
match = False
except:
match = False
elif key == "marriagerange":
try:
year1,year2 = value.split('-')
year1 = int(year1)
year2 = int(year2)
if not self.marriage_range_match(year1,year2):
match = False
except:
match = False
return match
def surname_match(self,name):
"""Match a string with the surname of an individual."""
(first,last) = self.name()
return last.find(name) >= 0
def given_match(self,name):
"""Match a string with the given names of an individual."""
(first,last) = self.name()
return first.find(name) >= 0
def birth_year_match(self,year):
"""Match the birth year of an individual. Year is an integer."""
return self.birth_year() == year
def birth_range_match(self,year1,year2):
"""Check if the birth year of an individual is in a given
range. Years are integers.
"""
year = self.birth_year()
if year >= year1 and year <= year2:
return True
return False
def death_year_match(self,year):
"""Match the death year of an individual. Year is an integer."""
return self.death_year() == year
def death_range_match(self,year1,year2):
"""Check if the death year of an individual is in a given range.
Years are integers.
"""
year = self.death_year()
if year >= year1 and year <= year2:
return True
return False
def marriage_year_match(self,year):
"""Check if one of the marriage years of an individual matches
the supplied year. Year is an integer.
"""
years = self.marriage_years()
return year in years
def marriage_range_match(self,year1,year2):
"""Check if one of the marriage year of an individual is in a
given range. Years are integers.
"""
years = self.marriage_years()
for year in years:
if year >= year1 and year <= year2:
return True
return False
def families(self):
"""Return a list of all of the family elements of a person."""
results = []
for e in self.children():
if e.tag() == "FAMS":
f = self.__dict.get(e.value(),None)
if f != None:
results.append(f)
return results
def name(self):
"""Return a person's names as a tuple: (first,last)."""
first = ""
last = ""
if not self.individual():
return (first,last)
for e in self.children():
if e.tag() == "NAME":
# some older Gedcom files don't use child tags but instead
# place the name in the value of the NAME tag
if e.value() != "":
name = string.split(e.value(),'/')
first = string.strip(name[0])
last = string.strip(name[1])
else:
for c in e.children():
if c.tag() == "GIVN":
first = c.value()
if c.tag() == "SURN":
last = c.value()
return (first,last)
def birth(self):
"""Return the birth tuple of a person as (date,place)."""
date = ""
place = ""
if not self.individual():
return (date,place)
for e in self.children():
if e.tag() == "BIRT":
for c in e.children():
if c.tag() == "DATE":
date = c.value()
if c.tag() == "PLAC":
place = c.value()
return (date,place)
def birth_year(self):
"""Return the birth year of a person in integer format."""
date = ""
if not self.individual():
return date
for e in self.children():
if e.tag() == "BIRT":
for c in e.children():
if c.tag() == "DATE":
datel = string.split(c.value())
date = datel[len(datel)-1]
if date == "":
return -1
try:
return int(date)
except:
return -1
def death(self):
"""Return the death tuple of a person as (date,place)."""
date = ""
place = ""
if not self.individual():
return (date,place)
for e in self.children():
if e.tag() == "DEAT":
for c in e.children():
if c.tag() == "DATE":
date = c.value()
if c.tag() == "PLAC":
place = c.value()
return (date,place)
def death_year(self):
"""Return the death year of a person in integer format."""
date = ""
if not self.individual():
return date
for e in self.children():
if e.tag() == "DEAT":
for c in e.children():
if c.tag() == "DATE":
datel = string.split(c.value())
date = datel[len(datel)-1]
if date == "":
return -1
try:
return int(date)
except:
return -1
def deceased(self):
"""Check if a person is deceased."""
if not self.individual():
return False
for e in self.children():
if e.tag() == "DEAT":
return True
return False
def marriage(self):
"""Return a list of marriage tuples for a person, each listing
(date,place).
"""
date = ""
place = ""
if not self.individual():
return (date,place)
for e in self.children():
if e.tag() == "FAMS":
f = self.__dict.get(e.value(),None)
if f == None:
return (date,place)
for g in f.children():
if g.tag() == "MARR":
for h in g.children():
if h.tag() == "DATE":
date = h.value()
if h.tag() == "PLAC":
place = h.value()
return (date,place)
def marriage_years(self):
"""Return a list of marriage years for a person, each in integer
format.
"""
dates = []
if not self.individual():
return dates
for e in self.children():
if e.tag() == "FAMS":
f = self.__dict.get(e.value(),None)
if f == None:
return dates
for g in f.children():
if g.tag() == "MARR":
for h in g.children():
if h.tag() == "DATE":
datel = string.split(h.value())
date = datel[len(datel)-1]
try:
dates.append(int(date))
except:
pass
return dates
def get_individual(self):
"""Return this element and all of its sub-elements."""
result = [self]
for e in self.children():
result.append(e)
return result
def get_family(self):
"""Return this element any all elements in its families."""
result = [self]
for e in self.children():
if e.tag() == "HUSB" or e.tag() == "WIFE" or e.tag() == "CHIL":
f = self.__dict.get(e.value())
if f != None:
result.append(f)
return result
def __str__(self):
"""Format this element as its original string."""
result = str(self.level())
if self.pointer() != "":
result += ' ' + self.pointer()
result += ' ' + self.tag()
if self.value() != "":
result += ' ' + self.value()
return result
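The Element docstring above describes the line grammar the parser consumes: a level, an optional @pointer@, a tag, and an optional value. The parser itself is written for Python 2; as a minimal Python 3 sketch of that same line grammar (the function name is mine, not part of the library, and the line is assumed well-formed):

```python
def parse_gedcom_line(line):
    # Level SP (Pointer SP)? Tag (SP Value)? -- mirrors Gedcom.__parse_line
    parts = line.split()
    level = int(parts[0])
    pointer = ''
    idx = 1
    if parts[1].startswith('@') and parts[1].endswith('@'):
        pointer = parts[1]
        idx = 2
    tag = parts[idx]
    value = ' '.join(parts[idx + 1:])
    return level, pointer, tag, value

# e.g. the header line of an individual record
level, pointer, tag, value = parse_gedcom_line('0 @I1@ INDI')
```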
|
Last edited by [EPIC] Tokaj; 01-26-2018, 04:57 AM.
There is also an issue which causes player to be stuck in the animation of reloading their weapons, specifically shotguns.
There is also a bug where when you emote or move with drift the bottom of his robe does not move.
And I don't know if it is Fortnite or my PlayStation, because when I hold down R2 it makes me look left.
Same here, Xbox too! This should easily be fixed.
- Can't identify squad member in spectate mode ( no names on their heads ) //this bug exist long ago.
- First shotgun ( pomp/tacticle) shot don't make damage when DBNO.
Dude, the first shotgun shot doesn't hit because of the protection. When you get knocked out, you have protection for like 1 second.
The DBNO thing isn't a bug. They are immune for a second after going down. Just don't shoot right away.
Is Autorun working for anyone? Doesn't work for me.
Do you have another controller to try? It might sound stupid but if the thumbstick calibration is off maybe it's pressing down slightly enough to stop the autorun? I've had issues with some controllers in the past where the sticks were off enough to mess with in-game menus but didn't seem to be affecting gameplay.
'Reset Building Choice' is bugged as it does not reset the building choice (it stays on the previous structure). I notice that this happens when switching from building to items in quick succession and this is essential for me when I'm in combat.
E.g. If the choice is on 'stairs' and I quickly cancel building, then enable building instantaneously (tapping the 'o' button twice on ps4 controller) it does not reset* to walls but instead stairs. Happens the same for any other structure. Trying to be as detailed here as possible and glad you guys are on the search for a fix.
There's an option in the settings menu that turns it off. Take a look at that; try rebinding the building controls or something.
Not being able to disable founder chat.
Maybe it's not Fortnite itself that crashes you. Try going into safe mode and restarting your PS4, or initializing your data. That will delete your local profiles (you'll have to download your games again), but you keep your game progress; it's in your cloud saves if you have a PS4 Plus account.
I've been there, discovered that in the pve missions where the husks have the "healing wave" upon death - it doesn't matter how far away you are to the killed husk, you will hear the sound of an emote or something like it each time a husk dies. ????
There's also a bug that won't allow you to pick up items until like the 10th time holding down the key to pick it up.
Reloading glitch with shotgun and all the ramps, when trying to build, seem crooked or at an angle instead of aligned with what I’m building next/up to.
|
from django.contrib.auth.decorators import login_required
from django.utils.decorators import method_decorator
from django.views.generic.base import View
from django.shortcuts import render
from .models import Project, Ticket, Log
class LoginRequiredView(View):
'''
This view can be visited only by authenticated users.
'''
@method_decorator(login_required)
def dispatch(self, *args, **kwargs):
return super(LoginRequiredView, self).dispatch(*args, **kwargs)
class UserPrivateView(View):
'''
This view can be visited only by single user (view owner).
'''
@method_decorator(login_required)
def dispatch(self, *args, **kwargs):
if not self.request.user == self.get_object():
return render(self.request, 'access-denied.html')
return super(UserPrivateView, self).dispatch(*args, **kwargs)
class SuperUserView(View):
'''
This view can be visited only by superusers.
'''
@method_decorator(login_required)
def dispatch(self, *args, **kwargs):
if not self.request.user.is_superuser:
return render(self.request, 'access-denied.html')
return super(SuperUserView, self).dispatch(*args, **kwargs)
class ProjectReletedView(View):
url_pk_related_model = Project
project = None
def get_project(self):
'''
Based on self.url_pk_related_model get project instance and set it as self.project.
'''
if self.project:
# project is already available
return
model_instance = self.url_pk_related_model.objects.get(pk=self.kwargs['pk'])
if isinstance(model_instance, Project):
self.project = model_instance
elif isinstance(model_instance, Ticket):
self.project = model_instance.project
elif isinstance(model_instance, Log):
self.project = model_instance.ticket.project
else:
raise ValueError
def is_project_member(self):
self.get_project()
return self.request.user.is_superuser or self.request.user in self.project.members.all()
class ProjectView(ProjectReletedView):
'''
If project IS PRIVATE give access to:
- project members
- superusers
'''
@method_decorator(login_required)
def dispatch(self, *args, **kwargs):
self.get_project()
if self.project.is_private and not self.is_project_member():
return render(self.request, 'access-denied.html')
return super(ProjectView, self).dispatch(*args, **kwargs)
class MembersOnlyView(ProjectReletedView):
'''
This view can be visited only by:
- project members
- superusers
'''
@method_decorator(login_required)
def dispatch(self, *args, **kwargs):
if not self.is_project_member():
return render(self.request, 'access-denied.html')
return super(MembersOnlyView, self).dispatch(*args, **kwargs)
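The mixins above all follow one pattern: override dispatch(), test a predicate, and short-circuit with an access-denied response before the parent dispatch runs. A minimal framework-free sketch of that guard pattern (the class names and the dict standing in for Django's request object are hypothetical):

```python
class View:
    # Stand-in for Django's base View.dispatch
    def dispatch(self, request):
        return 'ok'

class GuardedView(View):
    def has_access(self, request):
        return request.get('is_superuser', False)

    def dispatch(self, request):
        # Short-circuit before the parent dispatch runs, like the
        # render(request, 'access-denied.html') calls above.
        if not self.has_access(request):
            return 'access-denied'
        return super().dispatch(request)
```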
|
The parade is the opening of a day of children’s games and bingo sponsored by the Legion’s Junior Auxiliary. The day includes a vendor fair of work by local artisans, food, exhibits by high school cheer teams and an auction of goods and veteran-made items donated by businesses and other local folks. Each evening culminates with a live music concert, this year featuring country band 8 Seconds, Sheltered Reality (a youth drum corps), the Purdy River Band, and popular rock favorite Two Buck Chuck.
The parade was held Saturday morning, with everyone in the surrounding area invited to participate. Floats, vintage cars and tractors, school groups, local businesses, area fire department vehicles and even a few live animals show up each year to march down Swisher’s main drag, while hundreds of kids scramble for tossed candy.
All of the proceeds from Swisher Fun Days activities are used to support American Legion Veterans programs, which includes all veterans, whether they are members or not, and children and youth programs.
Every dollar donated to American Legion programs is used for veterans’ programming, Palas noted. American Legion membership supports its own administrative costs so none are taken from donations.
“One of the missions of the American Legion family, which is made up of Legion veterans, Ladies Auxiliary and Sons of the American Legion, is to be involved in and assist their communities to move forward into the future,” Palas said.
|
#!/usr/bin/env python
# Script for measuring response time for redmine urls.
# It attempts to authenticate via the html form which it is given on first GET.
# It should work generally for anything that forwards properly, and has
# "username" and "password" fields in the form.
#
# I tested it for CASino login and standard Redmine login
#
# Example invocation:
# ./redmine.py -u tkarasek -b http://193.166.24.110:8080 \
# -l /rb/master_backlog/digile -c 5
import os
import sys
import getpass
import argparse
import mechanize
import cookielib
import logging
import time
import prettytable
logger = logging.getLogger("mechanize")
logger.addHandler(logging.StreamHandler(sys.stdout))
logger.setLevel(logging.DEBUG)
DESCRIPTION = "FORGE benchmark for services behind CAS"
REDMINE_URL = 'https://support.forgeservicelab.fi/redmine'
MEASURED_URLS = ['/rb/taskboards/50',
'/rb/master_backlog/digile']
def getUser():
user = os.environ.get('USER')
if user and user != 'root':
print "Using username: %s" % user
else:
user = raw_input('Give username: ')
return user
def getPassword(user):
dot_file = os.path.join(os.environ['HOME'], '.ldappass')
pw = None
if os.path.isfile(dot_file):
with open(dot_file) as f:
pw = f.read().strip()
print "Using password from %s" % dot_file
if not pw:
pw = getpass.getpass(
prompt="Give password for username %s: " % user)
return pw
def getAuthenticatedHandle(baseurl, cookiejar, user, password, debug=False):
br = mechanize.Browser()
if debug:
br.set_debug_http(True)
br.set_debug_responses(True)
br.set_debug_redirects(True)
br.set_cookiejar(cookiejar)
br.set_handle_equiv(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)
br.addheaders = [
('User-agent', ('Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) '
'Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')),
('Accept', ('text/html,application/xhtml+xml,application/xml;q=0.9,'
'*/*;q=0.8'))
]
br.open(baseurl)
br.select_form(nr=0)
br.form['username'] = user
br.form['password'] = password
br.submit()
return br
def measureGet(browser, url):
start_time = time.time()
print "Getting %s .." % url
browser.open(url)
d = time.time() - start_time
print ".. took %.2f secs" % d
return d
def printResults(l):
x = prettytable.PrettyTable(['URL', 'avg time [sec]'])
for r in l:
x.add_row(r)
print x
if __name__ == '__main__':
parser = argparse.ArgumentParser(DESCRIPTION,
formatter_class=argparse.ArgumentDefaultsHelpFormatter)
help_count = 'how many times to measure timeout for each url'
help_baseurl = 'base url, e.g. https://auth.forgeservicelab.fi'
help_locations = ('locations to measure. full url is baseurl + locations. '
' e.g. /sessions')
parser.add_argument('-d','--debug', help='show debug output',
action='store_true')
parser.add_argument('-u','--user', help='user for CAS')
    parser.add_argument('-t','--test', help='print a sample results table and exit',
action='store_true')
parser.add_argument('-b','--baseurl', help=help_baseurl,
default=REDMINE_URL)
parser.add_argument('-c','--count', help=help_count, default=2, type=int)
parser.add_argument('-l','--locations', help=help_locations, nargs='+',
default=MEASURED_URLS)
args = parser.parse_args()
if args.test:
printResults([['a', '1.3'], ['b', '1.5']])
sys.exit(0)
if args.user:
print "Using the username from args: %s" % args.user
user = args.user
else:
user = getUser()
password = getPassword(user)
cookiejar = cookielib.LWPCookieJar()
print ('Trying to authenticate via html form with given username and '
'password ..')
browser = getAuthenticatedHandle(args.baseurl, cookiejar, user, password,
args.debug)
print ".. authenticated"
res = []
for l in args.locations:
url = args.baseurl + l
tmp = []
for i in range(args.count):
d = measureGet(browser, url)
tmp.append(d)
res.append([url, "%.2f" % (sum(tmp) / float(len(tmp)))])
printResults(res)
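The aggregation at the end of the main loop boils down to averaging the per-URL durations and formatting the result to two decimals. A small sketch of just that step, with stubbed durations in place of real HTTP timings (the helper name is mine):

```python
def average_timings(durations):
    # Mirrors the "%.2f" % (sum(tmp) / float(len(tmp))) aggregation above.
    return "%.2f" % (sum(durations) / float(len(durations)))

# one result row, as fed to printResults()
row = ['/rb/taskboards/50', average_timings([1.20, 1.40, 1.00])]
```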
|
China’s most talked-about bubble tea brand—The Alley—is finally coming to Singapore!
You would have seen its logo before, we’re sure. It is that very catchy antelope logo printed on an otherwise ubiquitous plastic cup. The Alley has been making waves in the tea scene since its launch in China and has expanded rapidly within Asia, including Malaysia and Thailand.
Just a few months back, when The Alley announced its launch in Malaysia, it drove the nation mad, and lots of Singaporeans even crossed the border to have a taste of the famous boba milk tea.
Now that they are hitting our shores, you do not have to travel that far. Here’s what you can expect from The Alley Singapore’s opening in April 2019.
More than just another Korean restaurant dishing out your typical Korean stews, fried chicken and soju, NY Night Market impresses with its unique, over-the-top renditions of popular Korean delicacies. No, this is far more than just another joint for you to drown out your post-Seoul vacation blues.
Hailing from Seoul, this restaurant brings to Singapore a taste of new-age, contemporary cuisine fusing the best of both Western and Korean influences. Imagine this: the vibrant cuisine of cosmopolitan New York City intermingling with the nuanced, hearty flavours of Korea. The result? A smorgasbord of mouthwatering delights that are just too decadent to handle—on your own at least. Nearly everything on the menu comes laden with melted cheese, cream or bacon which is never a bad idea—unless you’re dieting.
There are so many items to choose from, each with its own unique quirks and delicious appeal, but if we had to pick a favourite, we’d easily go for their M.A.C Feat Bacon (a twist on the classic mac & cheese) as well as their unique take on a Korean classic, Budaejjigae—a dish that sees a heaping mound of bulgogi beef and a jumbo pork sausage on top of the usual trimmings.
We hope you brought along your appetite, because NY Night Market is anything but light. What you can expect is a full-on feast of next-level fusion food, one that perfectly marries the best of Korean comfort food and New York City-style attitude.
|
from __future__ import absolute_import, division, print_function
import os
from bag8.project import Project
from bag8.yaml import Yaml
CURR_DIR = os.path.realpath('.')
def test_data():
# normal
project = Project('busybox')
assert Yaml(project).data == {
'busybox': {
'dockerfile': os.path.join(project.build_path, 'Dockerfile'),
'environment': {
'BAG8_LINKS': 'link',
'DNSDOCK_ALIAS': 'busybox.docker',
'DNSDOCK_IMAGE': '',
'DUMMY': 'nothing here',
'NGINX_UPSTREAM_SERVER_DOMAIN': 'link.docker',
},
'image': 'bag8/busybox',
'links': [
'link:link'
]
},
'link': {
'environment': {
'BAG8_LINKS': '',
'DNSDOCK_ALIAS': 'link.docker',
'DNSDOCK_IMAGE': '',
'DUMMY': 'nothing here too'
},
'expose': [1234],
'image': 'bag8/link'
}
}
# develop
project = Project('busybox', develop=True)
assert Yaml(project).data == {
'busybox': {
'dockerfile': os.path.join(project.build_path, 'Dockerfile'),
'environment': {
'BAG8_LINKS': 'link',
'DNSDOCK_ALIAS': 'busybox.docker',
'DNSDOCK_IMAGE': '',
'DUMMY': 'yo',
'NGINX_UPSTREAM_SERVER_DOMAIN': 'link.docker',
},
'image': 'bag8/busybox',
'links': [
'link:link'
],
'volumes': [
'{}:/tmp'.format(CURR_DIR)
]
},
'link': {
'environment': {
'BAG8_LINKS': '',
'DNSDOCK_ALIAS': 'link.docker',
'DNSDOCK_IMAGE': '',
'DUMMY': 'nothing here too'
},
'expose': [1234],
'image': 'bag8/link'
}
}
def test_service_dicts():
# normal
project = Project('busybox')
assert sorted(Yaml(project).service_dicts) == sorted([
{
'name': 'busybox',
'bag8_name': 'busybox',
'dockerfile': os.path.join(project.build_path, 'Dockerfile'),
'environment': {
'BAG8_LINKS': 'link',
'DNSDOCK_ALIAS': 'busybox.docker',
'DNSDOCK_IMAGE': '',
'DUMMY': 'nothing here',
'NGINX_UPSTREAM_SERVER_DOMAIN': 'link.docker',
},
'image': 'bag8/busybox',
'links': [
'link:link'
]
},
{
'name': 'link',
'bag8_name': 'link',
'environment': {
'BAG8_LINKS': '',
'DNSDOCK_ALIAS': 'link.docker',
'DNSDOCK_IMAGE': '',
'DUMMY': 'nothing here too'
},
'expose': [1234],
'image': 'bag8/link'
}
])
# develop
project = Project('busybox', develop=True)
assert sorted(Yaml(project).service_dicts) == sorted([
{
'name': 'busybox',
'bag8_name': 'busybox',
'dockerfile': os.path.join(project.build_path, 'Dockerfile'),
'environment': {
'BAG8_LINKS': 'link',
'DNSDOCK_ALIAS': 'busybox.docker',
'DNSDOCK_IMAGE': '',
'DUMMY': 'yo',
'NGINX_UPSTREAM_SERVER_DOMAIN': 'link.docker',
},
'image': 'bag8/busybox',
'links': [
'link:link'
],
'volumes': [
'{}:/tmp'.format(CURR_DIR)
]
},
{
'name': 'link',
'bag8_name': 'link',
'environment': {
'BAG8_LINKS': '',
'DNSDOCK_ALIAS': 'link.docker',
'DNSDOCK_IMAGE': '',
'DUMMY': 'nothing here too'
},
'expose': [1234],
'image': 'bag8/link'
}
])
# complex name
project = Project('link.2')
assert sorted(Yaml(project).service_dicts) == sorted([
{
'name': 'link2',
'bag8_name': 'link.2',
'dockerfile': os.path.join(project.build_path, 'Dockerfile'),
'environment': {
'BAG8_LINKS': '',
'DNSDOCK_ALIAS': 'link2.docker',
'DNSDOCK_IMAGE': '',
},
'image': 'bag8/busybox',
'links': []
}
])
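The develop variant in these tests differs from the normal one only by overridden environment values (`DUMMY` becomes `'yo'`) and extra `volumes`. A minimal sketch of such a service-dict override merge — the function name and merge rules here are illustrative, not bag8's actual API:

```python
def merge_service(base, override):
    """Merge an override service dict into a base one: the nested
    'environment' dict is merged key-by-key, list values are
    concatenated, and scalar values are replaced."""
    merged = dict(base)
    for key, value in override.items():
        if key == 'environment':
            env = dict(merged.get('environment', {}))
            env.update(value)
            merged['environment'] = env
        elif isinstance(value, list):
            merged[key] = merged.get(key, []) + value
        else:
            merged[key] = value
    return merged
```

Applying `merge_service(busybox, {'environment': {'DUMMY': 'yo'}, 'volumes': ['/src:/tmp']})` would produce the develop-mode dict shown above without mutating the base.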
|
That had to be all caps because it’s remarkable. Ever since moving into this house 13 years ago, our dining room didn’t have sun in it. Until now. We are nearly done with the remodel so last night we moved the dining room table out of the living room and back into the dining room. And this afternoon… Afternoon, mind you!!! … I sat at the table to plan an impromptu Wine & Cheese tasting party. IN. THE. SUN. Honestly, I had trouble concentrating on my notes because of the sun. Glorious sun!
There are no windows on the west side of our house and the roof has a big overhang so our home gets very little direct sun in it. The solution was two arch windows extending as high as possible in the west wall. I’ll make a blog post with more details soon.
To anyone else, that photo would be completely unremarkable. But I’m an artist, photographer and lover of sunshine. To not have sun is like Michelangelo having no paint. There are no accessories or furniture in the world as glorious as sun. I can’t believe I actually get to live here!
I hope I never get to the point of taking something so basic for granted. I basked in sun today.
|
# Copyright (C) 2011 Luke Macken <lmacken@redhat.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Affero General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Affero General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
from pyramid.httpexceptions import HTTPFound
from tw2.jqplugins.ui.base import set_ui_theme_name
from widgets import LeafyGraph
from widgets import LeafyDialog
from widgets import LeafySearchbar
import simplejson
import webob
def view_root(context, request):
return HTTPFound(location='/1')
def view_model(context, request):
# TODO -- we need a fedora jquery-ui theme sitting around.
set_ui_theme_name('hot-sneaks')
return {'item':context, 'project':'leafymiracle',
'jitwidget': LeafyGraph(rootObject=context),
'dialogwidget': LeafyDialog,
'searchbarwidget': LeafySearchbar,
}
def view_search(context, request):
term = request.params['term']
cats = request.params.get('cats', 'Category,Group,Package')
data = context.search(term, cats)
resp = webob.Response(request=request, content_type="application/json")
resp.body = simplejson.dumps(data)
return resp
|
We are organising an International Conference on Advances in Materials Science & Applied Biology (AMSAB) on Jan 8, 9 and 10th, 2019, in Mumbai. We request you to promote the conference through your website, email alerts, etc.
SVKM’s NMIMS (Deemed to be University) is one of the premier private universities in India (Grade A+ accreditation by NAAC). It has also emerged as a multi-disciplinary university with 6 campuses at Mumbai, Shirpur, Bengaluru, Hyderabad, Indore and Navi Mumbai, and 12 constituent schools that include Management, Engineering, Pharmacy, Architecture, Commerce, Business Economics, Science, Law, Aviation, Liberal Arts, Design & Continuing Education. Today more than 15,000 students and over 600 full-time faculty members are part of India's most sought-after academic community, NMIMS.
International Conference on Advances in Materials Science & Applied Biology (AMSAB) is an interdisciplinary forum for researchers working in the areas of material sciences and applied biology. The conference is aimed to disseminate knowledge about the recent advances made in these areas.
AMSAB will give researchers an opportunity to present their latest work and to spend three days among experienced academicians, industry researchers and peers in a scientifically motivating environment, exchanging experiences and knowledge. The conference also aims to discuss new approaches and application methods, beyond the usual research strategies and industrial applications, in the themes mentioned below. It envisages global networking to inspire innovative research, provide solutions and drive systemic change. Further, eminent scientists from across the globe will share their research and its future scope for the betterment of mankind.
Our target participants are Scientists, post-docs, research scholars and academicians/faculties.
The conference brochure is attached herewith.
Kindly note that the last date for abstract submission is 31st July 2018.
|
from miro.test.framework import MiroTestCase
from miro.frontends.widgets.widgetstatestore import WidgetStateStore
from miro.frontends.widgets.itemlist import SORT_KEY_MAP
class WidgetStateConstants(MiroTestCase):
def setUp(self):
MiroTestCase.setUp(self)
self.display_types = set(WidgetStateStore.get_display_types())
def test_view_types(self):
# test that all view types are different
view_types = (WidgetStateStore.get_list_view_type(),
WidgetStateStore.get_standard_view_type(),
WidgetStateStore.get_album_view_type())
for i in range(len(view_types)):
for j in range(i + 1, len(view_types)):
self.assertNotEqual(view_types[i], view_types[j])
def test_default_view_types(self):
display_types = set(WidgetStateStore.DEFAULT_VIEW_TYPE)
self.assertEqual(self.display_types, display_types)
def test_default_column_widths(self):
# test that all available columns have widths set for them
# calculate all columns that available for some display/view
# combination
available_columns = set()
display_id = None # this isn't used yet, just set it to a dummy value
for display_type in self.display_types:
for view_type in (WidgetStateStore.get_list_view_type(),
WidgetStateStore.get_standard_view_type(),
WidgetStateStore.get_album_view_type()):
available_columns.update(
WidgetStateStore.get_columns_available(
display_type, display_id, view_type))
# make sure that we have widths for those columns
self.assertEqual(available_columns,
set(WidgetStateStore.DEFAULT_COLUMN_WIDTHS.keys()))
def test_default_sort_column(self):
display_types = set(WidgetStateStore.DEFAULT_SORT_COLUMN)
self.assertEqual(self.display_types, display_types)
def test_default_columns(self):
display_types = set(WidgetStateStore.DEFAULT_COLUMNS)
self.assertEqual(self.display_types, display_types)
def test_available_columns(self):
# Currently what get_display_types() uses. Testing it anyway.
display_types = set(WidgetStateStore.AVAILABLE_COLUMNS)
self.assertEqual(self.display_types, display_types)
def test_sort_key_map(self):
columns = set(WidgetStateStore.DEFAULT_COLUMN_WIDTHS)
sort_keys = set(SORT_KEY_MAP)
self.assertEqual(sort_keys, columns)
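The nested index loops in `test_view_types` check that all view types are pairwise distinct; the same check can be phrased more directly with `itertools.combinations` (a stylistic sketch, not Miro code):

```python
from itertools import combinations

def all_distinct(values):
    """True when no two elements of `values` compare equal."""
    return all(a != b for a, b in combinations(values, 2))
```

With this helper, the test body collapses to a single `assertTrue(all_distinct(view_types))`.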
|
database. I have been getting multiple copies of people's e-mails.
of the user group or for code tweaks?
for 2 lines relating to mySQL.
|
# -*- coding: utf-8 -*-
from __future__ import unicode_literals
from datetime import date, datetime, timedelta
from dateutil.relativedelta import relativedelta
from django.apps import apps
from django.core.exceptions import ObjectDoesNotExist
from django.http import Http404
from rest_framework.response import Response
from rest_framework.generics import GenericAPIView
from ..utils.usage_statistics import get_usage_statistics
class UsageStatisticsView(GenericAPIView):
def get_object(self, obj_ct, obj_uuid):
try:
obj = apps.get_model(*obj_ct.split(".")).objects.get(uuid=obj_uuid)
return obj
except ObjectDoesNotExist:
raise Http404
def get(self, request, obj_ct, obj_uuid):
obj = self.get_object(obj_ct, obj_uuid)
# for the moment the range defaults to the last 12 months (including current)
today = datetime.now()
end = date(today.year + today.month // 12, today.month % 12 + 1, 1) - timedelta(1)
start = end - relativedelta(years=1, days=-1)
usage_statistics = get_usage_statistics(obj=obj, start=start, end=end)
return Response(usage_statistics)
# return super(UsageStatisticsView, self).list(request, *args, **kwargs)
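The range computation above leans on dateutil's `relativedelta`; the same "last 12 months including the current one" window can be derived with the standard library alone. A sketch, not the view's actual code:

```python
from datetime import date, timedelta

def last_twelve_months(today):
    """Return (start, end): the first day of the month 11 months ago
    and the last day of the current month."""
    # Last day of the current month: first day of next month minus one day.
    end = date(today.year + today.month // 12,
               today.month % 12 + 1, 1) - timedelta(days=1)
    # First day of the month 11 months earlier, via a flat month count.
    months = today.year * 12 + (today.month - 1) - 11
    year, month = divmod(months, 12)
    start = date(year, month + 1, 1)
    return start, end
```

The flat month count sidesteps the year-boundary bookkeeping that `relativedelta` otherwise handles.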
|
Posted in Accidents by Mikhail Voytenko on Jan 21, 2016 at 07:44.
Passenger cargo vessel SABUK NUSANTARA 49 reported fire on board at 2340 LT Jan 20, off Kupang, West Timor, in the Savu Sea. The vessel, with 81 passengers, 19 crew and cargo on board, had left Kupang, bound for Naikliu Kupang and on to the North Maluku islands. All passengers were evacuated and transferred to Tenau port, Kupang. The fire, according to preliminary reports, started in passenger luggage and spread along the passenger deck portside. No injuries were reported, and the fire is understood to have been extinguished.
|
"""
Manage your Anaconda repository channels.
"""
from __future__ import unicode_literals, print_function
from binstar_client.utils import get_server_api
import functools
import logging
import argparse
logger = logging.getLogger('binstar.channel')
def main(args, name, deprecated=False):
aserver_api = get_server_api(args.token, args.site)
if args.organization:
owner = args.organization
else:
current_user = aserver_api.user()
owner = current_user['login']
if deprecated:
logger.warning('channel command is deprecated in favor of label')
if args.copy:
aserver_api.copy_channel(args.copy[0], owner, args.copy[1])
logger.info("Copied {} {} to {}".format(name, *tuple(args.copy)))
elif args.remove:
aserver_api.remove_channel(args.remove, owner)
logger.info("Removed {} {}".format(name, args.remove))
elif args.list:
logger.info('{}s'.format(name.title()))
for channel, info in aserver_api.list_channels(owner).items():
if isinstance(info, int): # OLD API
logger.info((' + %s ' % channel))
else:
logger.info((' + %s ' % channel) + ('[locked]' if info['is_locked'] else ''))
elif args.show:
info = aserver_api.show_channel(args.show, owner)
logger.info('{} {} {}'.format(
name.title(),
args.show,
('[locked]' if info['is_locked'] else '')
))
for f in info['files']:
logger.info(' + %(full_name)s' % f)
elif args.lock:
aserver_api.lock_channel(args.lock, owner)
logger.info("{} {} is now locked".format(name.title(), args.lock))
elif args.unlock:
aserver_api.unlock_channel(args.unlock, owner)
logger.info("{} {} is now unlocked".format(name.title(), args.unlock))
else:
raise NotImplementedError()
def _add_parser(subparsers, name, deprecated=False):
deprecated_warn = ""
if deprecated:
deprecated_warn = "[DEPRECATED in favor of label] \n"
subparser = subparsers.add_parser(
name,
help='{}Manage your Anaconda repository {}s'.format(deprecated_warn, name),
formatter_class=argparse.RawDescriptionHelpFormatter,
description=__doc__)
    subparser.add_argument('-o', '--organization',
                           help="Manage an organization's {}s".format(name))
group = subparser.add_mutually_exclusive_group(required=True)
    group.add_argument(
        '--copy',
        nargs=2,
        metavar=name.upper(),
        help="{}Copy a {} from one name to another".format(deprecated_warn, name)
    )
group.add_argument(
'--list',
action='store_true',
help="{}list all {}s for a user".format(deprecated_warn, name)
)
group.add_argument(
'--show',
metavar=name.upper(),
help="{}Show all of the files in a {}".format(deprecated_warn, name)
)
group.add_argument(
'--lock',
metavar=name.upper(),
help="{}Lock a {}".format(deprecated_warn, name))
group.add_argument(
'--unlock',
metavar=name.upper(),
help="{}Unlock a {}".format(deprecated_warn, name)
)
group.add_argument(
'--remove',
metavar=name.upper(),
help="{}Remove a {}".format(deprecated_warn, name)
)
subparser.set_defaults(main=functools.partial(main, name=name, deprecated=deprecated))
def add_parser(subparsers):
_add_parser(subparsers, name="label")
_add_parser(subparsers, name="channel", deprecated=True)
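The subcommand above hinges on argparse's required mutually exclusive group: exactly one of `--copy`/`--list`/`--show`/`--lock`/`--unlock`/`--remove` must be given, or parsing fails before `main` ever runs. A standalone sketch of that pattern:

```python
import argparse

def build_parser():
    """Parser where exactly one of --list / --show must be supplied."""
    parser = argparse.ArgumentParser(prog='demo')
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument('--list', action='store_true', help='list all items')
    group.add_argument('--show', metavar='NAME', help='show one item')
    return parser
```

Passing both flags (or neither) makes `parse_args()` print an error and exit, which is why the channel command's `else: raise NotImplementedError()` branch is effectively unreachable from the CLI.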
|
Published 04/23/2019 06:33:00 pm in Interactive Room Design.
|
import os
target_arch = 'arm64'
android_api_level = os.getenv('ANDROID_VER')
# Python optional modules.
# Available:
# tinycc - Tiny cc compiler
# bzip2 - enable the bz2 module and the bzip2 codec
# xz - enable the lzma module and the lzma codec
# openssl - enable the ssl module and SSL/TLS support for sockets
# readline - enable the readline module and command history/the like in the REPL
# ncurses - enable the curses module
# sqlite - enable the sqlite3 module
# gdbm - enable the dbm/gdbm modules
# libffi - enable the ctypes module
# zlib - enable the zlib module
# expat - enable the pyexpat module
# tools - some handy utility scripts from ./devscripts
packages = ('openssl', 'ncurses', 'readline', 'bzip2', 'xz', 'zlib', 'sqlite', 'gdbm', 'libffi', 'expat', 'tools')
# 3rd Python modules.
py_packages = ('pycryptodome', ) #'openblas', 'numpy', 'scipy', 'pandas', 'kiwisolver', 'matplotlib', 'theano', 'scikitlearn')
#py_packages = ('libzmq','pyzmq')
py_packages2 = ('scikitlearn2',) #'libzmq','pyzmq2', 'numpy2','scipy2','pandas2','matplotlib2','kiwisolver2','theano2','pillow2','dropbear','dulwich2'
#'pyjnius2','android2','pygamesdl2','kivy2','libxml2','libxslt','lxml2','cryptography2'
#'pyopenssl2'
skip_build_py = os.path.exists(".skip_build_py")
skip_build_py2 = os.path.exists(".skip_build_py2")
skip_build_py_module = os.path.exists(".skip_build_py_module")
skip_build_py2_module = os.path.exists(".skip_build_py2_module")
use_bintray = False
bintray_username = 'qpython-android'
bintray_repo = 'qpython3-core'
|
Twitter doesn’t satisfy Kareena Kapoor. While all the Bollywood actors are busy joining the famous social networking site, our lovely lady Kareena probably aspires to something bigger and more exclusive.
This lady has launched her official website (www.kareenakapoor.me) where she hopes to share tidbits of her daily life with her fans.
“This is my own site, my own space. I like the idea of leaving Post-it notes for my fans. I have created a special fan book where fans can write to me directly,” Kareena said in a statement.
On her website, Kareena also declared that she’s delighted and privileged to be part of the global 1 GOAL campaign which aims to support education for all.
“1 GOAL campaign will reach a number of people who are not familiar with Indian cinema. This site will be a window to India and Indian cinema,” said the 29-year-old actress.
|
from google.appengine.ext import ndb
from consts.district_type import DistrictType
from database.dict_converters.district_converter import DistrictConverter
from database.dict_converters.team_converter import TeamConverter
from database.database_query import DatabaseQuery
from models.district import District
from models.district_team import DistrictTeam
from models.event import Event
from models.event_team import EventTeam
from models.team import Team
class TeamQuery(DatabaseQuery):
CACHE_VERSION = 2
CACHE_KEY_FORMAT = 'team_{}' # (team_key)
DICT_CONVERTER = TeamConverter
@ndb.tasklet
def _query_async(self):
team_key = self._query_args[0]
team = yield Team.get_by_id_async(team_key)
raise ndb.Return(team)
class TeamListQuery(DatabaseQuery):
CACHE_VERSION = 2
CACHE_KEY_FORMAT = 'team_list_{}' # (page_num)
PAGE_SIZE = 500
DICT_CONVERTER = TeamConverter
@ndb.tasklet
def _query_async(self):
page_num = self._query_args[0]
start = self.PAGE_SIZE * page_num
end = start + self.PAGE_SIZE
teams = yield Team.query(Team.team_number >= start, Team.team_number < end).fetch_async()
raise ndb.Return(teams)
class TeamListYearQuery(DatabaseQuery):
CACHE_VERSION = 2
CACHE_KEY_FORMAT = 'team_list_year_{}_{}' # (year, page_num)
DICT_CONVERTER = TeamConverter
@ndb.tasklet
def _query_async(self):
year = self._query_args[0]
page_num = self._query_args[1]
event_team_keys_future = EventTeam.query(EventTeam.year == year).fetch_async(keys_only=True)
teams_future = TeamListQuery(page_num).fetch_async()
year_team_keys = set()
for et_key in event_team_keys_future.get_result():
team_key = et_key.id().split('_')[1]
year_team_keys.add(team_key)
teams = filter(lambda team: team.key.id() in year_team_keys, teams_future.get_result())
raise ndb.Return(teams)
class DistrictTeamsQuery(DatabaseQuery):
CACHE_VERSION = 3
CACHE_KEY_FORMAT = 'district_teams_{}' # (district_key)
DICT_CONVERTER = TeamConverter
@ndb.tasklet
def _query_async(self):
district_key = self._query_args[0]
district_teams = yield DistrictTeam.query(
DistrictTeam.district_key == ndb.Key(District, district_key)).fetch_async()
team_keys = map(lambda district_team: district_team.team, district_teams)
teams = yield ndb.get_multi_async(team_keys)
raise ndb.Return(teams)
class EventTeamsQuery(DatabaseQuery):
CACHE_VERSION = 2
CACHE_KEY_FORMAT = 'event_teams_{}' # (event_key)
DICT_CONVERTER = TeamConverter
@ndb.tasklet
def _query_async(self):
event_key = self._query_args[0]
event_teams = yield EventTeam.query(EventTeam.event == ndb.Key(Event, event_key)).fetch_async()
team_keys = map(lambda event_team: event_team.team, event_teams)
teams = yield ndb.get_multi_async(team_keys)
raise ndb.Return(teams)
class EventEventTeamsQuery(DatabaseQuery):
CACHE_VERSION = 2
CACHE_KEY_FORMAT = 'event_eventteams_{}' # (event_key)
@ndb.tasklet
def _query_async(self):
event_key = self._query_args[0]
event_teams = yield EventTeam.query(EventTeam.event == ndb.Key(Event, event_key)).fetch_async()
raise ndb.Return(event_teams)
class TeamParticipationQuery(DatabaseQuery):
CACHE_VERSION = 1
CACHE_KEY_FORMAT = 'team_participation_{}' # (team_key)
@ndb.tasklet
def _query_async(self):
team_key = self._query_args[0]
event_teams = yield EventTeam.query(EventTeam.team == ndb.Key(Team, team_key)).fetch_async(keys_only=True)
years = map(lambda event_team: int(event_team.id()[:4]), event_teams)
raise ndb.Return(set(years))
class TeamDistrictsQuery(DatabaseQuery):
CACHE_VERSION = 2
CACHE_KEY_FORMAT = 'team_districts_{}' # (team_key)
DICT_CONVERTER = DistrictConverter
@ndb.tasklet
def _query_async(self):
team_key = self._query_args[0]
district_team_keys = yield DistrictTeam.query(DistrictTeam.team == ndb.Key(Team, team_key)).fetch_async(keys_only=True)
districts = yield ndb.get_multi_async([ndb.Key(District, dtk.id().split('_')[0]) for dtk in district_team_keys])
raise ndb.Return(filter(lambda x: x is not None, districts))
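`TeamListQuery` pages by team number rather than by datastore cursor: page *n* covers team numbers in the half-open range `[n*500, (n+1)*500)`. A minimal sketch of that windowing in plain Python (not an ndb query):

```python
PAGE_SIZE = 500

def page_bounds(page_num, page_size=PAGE_SIZE):
    """Half-open team-number window covered by a page."""
    start = page_size * page_num
    return start, start + page_size

def teams_on_page(team_numbers, page_num, page_size=PAGE_SIZE):
    """Filter team numbers down to those falling on the given page."""
    start, end = page_bounds(page_num, page_size)
    return [n for n in team_numbers if start <= n < end]
```

Number-range paging keeps the cache key (`team_list_{page_num}`) stable regardless of how many teams exist, at the cost of unevenly filled pages.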
|
I am 16 years old, play the violin, am on a rock climbing team, compete at science olympiad, and am in a couple school clubs. I am part of the Jewish congregation in town, am involved in my youth group, and go to a Jewish outdoor adventure summer camp.
I am part of ICEY because climate change scares me, and I believe that we can curb it if we all come together.
I love to hike and be outdoors. It’s a big part of my life and contributes to what makes me who I am. One of the first things I learned about being outdoors is the “leave no trace” rule, which is that you always leave the area you are in exactly the same or nicer than you found it. Why doesn’t this principle apply in the big picture? Why aren’t we trying to leave the world in a better condition than we found it in?
|
#!/usr/bin/python
# Copyright 2010 Google Inc.
# Licensed under the Apache License, Version 2.0
# http://www.apache.org/licenses/LICENSE-2.0
# Google's Python Class
# http://code.google.com/edu/languages/google-python-class/
# Problem description:
# https://developers.google.com/edu/python/exercises/copy-special
import sys
import re
import os
import shutil
import zipfile
"""Copy Special exercise
"""
def is_special(name):
    """Return True if the filename contains a __word__ marker."""
    return re.search(r"__.*__", name) is not None

def get_special_paths(d):
    """Return the full paths of all special files directly inside dir d."""
    special = []
    for f in os.listdir(d):
        file_to_check = os.path.join(d, f)
        if os.path.isfile(file_to_check) and is_special(f):
            special.append(file_to_check)
    return special

def copy_to_dir(paths, dest_dir):
    """Copy each path into dest_dir, creating the directory if needed."""
    if not os.path.exists(dest_dir):
        os.makedirs(dest_dir)
    for path in paths:
        shutil.copyfile(path, os.path.join(dest_dir, os.path.basename(path)))

def copy_to_zip(paths, zip_fname):
    """Write the given paths into a new zip archive."""
    with zipfile.ZipFile(zip_fname, 'w') as myzip:
        for p in paths:
            myzip.write(p)
def main():
# This basic command line argument parsing code is provided.
# Add code to call your functions below.
# Make a list of command line arguments, omitting the [0] element
# which is the script itself.
args = sys.argv[1:]
if not args:
print("usage: [--todir dir][--tozip zipfile] dir [dir ...]")
sys.exit(1)
# todir and tozip are either set from command line
# or left as the empty string.
# The args array is left just containing the dirs.
todir = ''
if args[0] == '--todir':
todir = args[1]
del args[0:2]
tozip = ''
if args[0] == '--tozip':
tozip = args[1]
del args[0:2]
if len(args) == 0:
print("error: must specify one or more dirs")
sys.exit(1)
#check each directory given in the input
special_paths = []
for d in args:
print('processing ' + d)
special_paths.extend(get_special_paths(d))
if todir:
copy_to_dir(special_paths, todir)
if tozip:
copy_to_zip(special_paths, tozip)
if __name__ == "__main__":
main()
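The whole exercise turns on one regex: a filename is "special" when it contains a double-underscore-delimited run. The pattern can be exercised in isolation as a pure filter over names, with no filesystem involved:

```python
import re

def special_names(names):
    """Filter names containing a __word__ marker, as in the exercise."""
    return [n for n in names if re.search(r"__.*__", n)]
```

Note the pattern is a substring search, so `xyz__hello__.txt` and `__init__.py` both match while `boring.txt` does not.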
|
‘Tribes 1.1’ by Weird Dust is the third release on Brussels-based label and vinyl purveyor Crevette Records. The six tracks reflect an eclectic taste in hardware, melding organic and heavily processed sounds into a sort of musical retro-futurism. The percussion distinguishes itself, simultaneously free-form and mathematical, and certain to make you dance. Up close, ‘Tribes’ gestures at something alien, like going back to the jungle, but on another planet. This is what the future was like; notice the weird dust all around you.
|
import xxhash
from similarities.distances.sparse_vector import cosine_sparse_vector
class FeatureHash(object):
    def __init__(self, tokens, length=100000):
        """Build a signed feature-hash (hashing trick) vector with at most
        `length` buckets.

        Input can be any iterable; a plain string is split into words
        first, assuming you don't want to iterate over the individual
        characters, and a dict is treated as a token -> weight mapping.
        Returns nothing.
        """
if isinstance(tokens,basestring):
tokens = tokens.split()
v = {}
if isinstance(tokens,dict):
for value,w in tokens.iteritems():
k = xxhash.xxh64(value).intdigest()
x = v.get(k%length,0)
if k & 1 << 63:
v[k%length] = x + w
else:
v[k%length] = x - w
else:
for value in tokens:
k = xxhash.xxh64(value).intdigest()
x = v.get(k%length,0)
if k & 1 << 63:
v[k%length] = x + 1
else:
v[k%length] = x - 1
self.hash = v
self.vector = v
def similarity(self, other_hash):
"""Calculate how different this hash is from another simhash.
Returns a float from 0.0 to 1.0 (inclusive)
"""
return 1.0 - self.distance(other_hash)
def distance(self,other_hash):
return cosine_sparse_vector(self.hash, other_hash.hash)
def digest(self):
return self.hash
def __eq__(self, other):
return self.hash == other.hash
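The class above implements the signed hashing trick: each token lands in bucket `hash(token) % length`, one bit of the hash chooses the sign, and documents are compared by cosine over the resulting sparse vectors. A self-contained stdlib sketch of the same idea, using `hashlib` in place of xxhash and an inlined cosine:

```python
import hashlib
import math

def hashed_vector(tokens, length=100000):
    """Signed feature-hash vector: bucket = hash % length, sign from one bit."""
    v = {}
    for tok in tokens:
        h = int(hashlib.md5(tok.encode('utf-8')).hexdigest(), 16)
        bucket = h % length
        v[bucket] = v.get(bucket, 0) + (1 if h & (1 << 63) else -1)
    return v

def cosine_similarity(a, b):
    """Cosine similarity between two sparse vectors stored as dicts."""
    dot = sum(w * b.get(k, 0) for k, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

The signed increment means colliding tokens partially cancel rather than inflate a bucket, which keeps expected inner products unbiased.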
|
Shaker Style Cabinet Doors – This image collection about Shaker Style Cabinet Doors is available to download. We gathered these images from the internet and selected the best for you; the pictures posted here were carefully chosen and published by admin after picking the ones that stood out among the rest. According to search engine data such as AdWords and Google Trends, Shaker Style Cabinet Doors has become one of the most popular topics in this category, which is why we are featuring it now. If you are searching for a fresh concept for your decoration ideas, these pictures can serve as a reference or an alternative starting point.
This gallery was added by admin on 2017-07-15 00:40:15 and tagged in the Shaker Style Cabinet Doors field. We hope you like it as much as we do; please share this Shaker Style Cabinet Doors image with your friends and family via Google Plus, Facebook, Twitter, Instagram or another social media site.
Shaker Style Cabinet Doors is an elegant entry in our Complete Home Design Ideas compilation, uploaded in the hope of giving you ideas to redesign your home. This page can be your reference when you are unsure which redesign to choose, because having a home with our own design is everyone’s dream.
|
from ..commons import OrderedDict
from ..commons.draw_utils import *
from .box import Box
from .sizes import *
from .time_slice_box import TimeSliceBox
from .. import settings
class PropTimeLineBox(Box):
TOTAL_LABEL_WIDTH = PROP_NAME_LABEL_WIDTH + PROP_VALUE_LABEL_WIDTH + 6*PROP_NAME_LABEL_RIGHT_PADDING
def __init__(self, prop_time_line, shape_time_line_box):
Box.__init__(self, shape_time_line_box)
self.prop_time_line = prop_time_line
self.time_slice_boxes = OrderedDict()
self.y_per_value = 1
self.min_value = -1
self.max_value = 1
self.slices_container_box = Box(self)
self.slices_container_box.left = self.TOTAL_LABEL_WIDTH
self.vertical_zoom = 1.
self.time_line_width = 0.
self.update()
def get_multi_shape_time_line(self):
return self.parent_box.get_multi_shape_time_line()
def get_time_slice_box_at_index(self, index):
return self.time_slice_boxes.get_item_at_index(index)
def set_time_multiplier(self, scale):
self.slices_container_box.scale_x = scale
def get_time_multiplier(self):
return self.slices_container_box.scale_x
def set_vertical_multiplier(self, scale):
self.slices_container_box.scale_y *= scale
def update(self):
min_value, max_value = self.prop_time_line.get_min_max_value()
if max_value == min_value:
max_value = min_value + 1
diff = float(max_value-min_value)
self.max_value = max_value + diff*.05
self.min_value = min_value - diff*.05
self.y_per_value = HEIGHT_PER_TIME_SLICE/(self.max_value-self.min_value)
for time_slice in self.time_slice_boxes.keys:
if not self.prop_time_line.time_slices.key_exists(time_slice):
self.time_slice_boxes.remove(time_slice)
scaled_width = 0
width = 0
height = 0
horiz_index = 0
for time_slice in self.prop_time_line.time_slices:
if not self.time_slice_boxes.key_exists(time_slice):
time_slice_box = TimeSliceBox(time_slice, self)
self.time_slice_boxes.insert(horiz_index, time_slice, time_slice_box)
else:
time_slice_box = self.time_slice_boxes[time_slice]
time_slice_box.set_index(horiz_index)
time_slice_box.update()
time_slice_box.left = width
time_slice_box.top = 0
outline = time_slice_box.get_rel_outline()
width += outline.width
if height<outline.height:
height = outline.height#*self.slices_container_box.scale_y
horiz_index += 1
self.width = width*self.slices_container_box.scale_x + self.slices_container_box.left
self.height = height*self.slices_container_box.scale_y + PROP_TIME_LINE_VERTICAL_PADDING
self.time_line_width = width
def draw(self, ctx, visible_time_span):
ctx.save()
self.pre_draw(ctx)
ctx.rectangle(0, 0, 2000, self.height-PROP_TIME_LINE_VERTICAL_PADDING)
ctx.restore()
draw_stroke(ctx, 1, "aaaaaa")
time_elapsed = 0
for time_slice in self.prop_time_line.time_slices:
if time_elapsed>visible_time_span.end:
break
if time_elapsed+time_slice.duration<visible_time_span.start:
time_elapsed += time_slice.duration
continue
time_slice_box = self.time_slice_boxes[time_slice]
ctx.save()
            rel_visible_time_span = visible_time_span.copy()
            rel_visible_time_span.start -= time_elapsed
            rel_visible_time_span.end -= time_elapsed
            if rel_visible_time_span.start < 0:
                rel_visible_time_span.start = 0
            if rel_visible_time_span.end > time_slice.duration:
                rel_visible_time_span.end = time_slice.duration
time_slice_box.draw(ctx, rel_visible_time_span)
ctx.restore()
time_elapsed += time_slice.duration
ctx.save()
self.pre_draw(ctx)
ctx.rectangle(-SHAPE_LINE_LEFT_PADDING-2, -5,
PropTimeLineBox.TOTAL_LABEL_WIDTH+SHAPE_LINE_LEFT_PADDING, self.height+5)
ctx.restore()
draw_fill(ctx, PROP_LEFT_BACK_COLOR)
draw_text(ctx,
self.prop_time_line.prop_name, 0, 0, font_name=settings.TIME_LINE_FONT,
width=PROP_NAME_LABEL_WIDTH,
text_color = PROP_NAME_TEXT_COLOR, padding=PROP_NAME_LABEL_RIGHT_PADDING,
border_color = PROP_NAME_BORDER_COLOR, border_width=2,
back_color = PROP_NAME_BACK_COLOR, pre_draw=self.pre_draw)
value_x_pos = PROP_NAME_LABEL_WIDTH + 3*PROP_NAME_LABEL_RIGHT_PADDING + PROP_VALUE_LABEL_WIDTH
draw_text(ctx, "{0:.2f}".format(self.max_value), font_name="7",
x=value_x_pos, align="right bottom-center",
width=PROP_VALUE_LABEL_WIDTH, fit_width=True,
y=0, text_color="000000", pre_draw=self.pre_draw)
draw_text(ctx, "{0:02.2f}".format(self.min_value), font_name="7",
x=value_x_pos, align="right bottom",
width=PROP_VALUE_LABEL_WIDTH, fit_width=True,
y=self.height, text_color="000000", pre_draw=self.pre_draw)
ctx.save()
self.pre_draw(ctx)
draw_straight_line(ctx, value_x_pos, 0, value_x_pos+2*PROP_NAME_LABEL_RIGHT_PADDING, 0)
draw_stroke(ctx, 1)
draw_straight_line(ctx, value_x_pos, self.height-END_POINT_HEIGHT*.5,
value_x_pos+2*PROP_NAME_LABEL_RIGHT_PADDING,
self.height-END_POINT_HEIGHT*.5)
draw_stroke(ctx, 1)
ctx.restore()
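The slice-clipping logic in `draw` can be sketched standalone: shift the visible span into the slice's local time, then clamp it to [0, duration]. The `Span` class below is a stand-in for the real time-span object, invented for the demo:

```python
class Span:
    def __init__(self, start, end):
        self.start, self.end = start, end

def clip_to_slice(span, elapsed, duration):
    # Shift into the slice's local time, then clamp to its extent,
    # as the draw loop above does for each time slice.
    local = Span(span.start - elapsed, span.end - elapsed)
    local.start = max(local.start, 0)
    local.end = min(local.end, duration)
    return local

s = clip_to_slice(Span(3, 20), elapsed=5, duration=10)
print(s.start, s.end)   # 0 10
```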
|
The Tower at Piedmont Center, Building 15, is an X-story, prominent Class A office building rising above an established, connected campus community. Under new local ownership, The Tower boasts a variety of on-site amenities that support business growth and talent retention. Interior common-area renovations create an elevated sense of arrival, while targeted conference center enhancements transform meeting areas into thoughtful spaces designed to boost productivity. With flexible suites suited to studios and full-floor corporations alike, The Tower at Piedmont Center invites both seasoned and young professionals to stake their claim in a workplace built for quality and performance.
|
# -*- coding: utf-8 -*-
# Copyright © 2012-2016 Roberto Alsina and others.
# Permission is hereby granted, free of charge, to any
# person obtaining a copy of this software and associated
# documentation files (the "Software"), to deal in the
# Software without restriction, including without limitation
# the rights to use, copy, modify, merge, publish,
# distribute, sublicense, and/or sell copies of the
# Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice
# shall be included in all copies or substantial portions of
# the Software.
#
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY
# KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE
# WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS
# OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR
# OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
# SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""Beg the user to switch to python 3."""
import datetime
import os
import random
import sys
import doit.tools
from nikola.utils import get_logger, STDERR_HANDLER
from nikola.plugin_categories import LateTask
PY2_AND_NO_PY3_WARNING = """Nikola is going to deprecate Python 2 support in 2016. Your current
version will continue to work, but please consider upgrading to Python 3.
Please check http://bit.ly/1FKEsiX for details.
"""
PY2_WARNING = """Nikola is going to deprecate Python 2 support in 2016. You already have Python 3
available in your system. Why not switch?
Please check http://bit.ly/1FKEsiX for details.
"""
PY2_BARBS = [
"Python 2 has been deprecated for years. Stop clinging to your long gone youth and switch to Python3.",
"Python 2 is the safety blanket of languages. Be a big kid and switch to Python 3",
"Python 2 is old and busted. Python 3 is the new hotness.",
"Nice unicode you have there, would be a shame something happened to it.. switch to python 3!.",
"Don’t get in the way of progress! Upgrade to Python 3 and save a developer’s mind today!",
"Winners don't use Python 2 -- Signed: The FBI",
"Python 2? What year is it?",
"I just wanna tell you how I'm feeling\n"
"Gotta make you understand\n"
"Never gonna give you up [But Python 2 has to go]",
"The year 2009 called, and they want their Python 2.7 back.",
]
LOGGER = get_logger('Nikola', STDERR_HANDLER)
def has_python_3():
"""Check if python 3 is available."""
    if sys.platform.startswith('win'):
py_bin = 'py.exe'
else:
py_bin = 'python3'
for path in os.environ["PATH"].split(os.pathsep):
if os.access(os.path.join(path, py_bin), os.X_OK):
return True
return False
class Py3Switch(LateTask):
"""Beg the user to switch to python 3."""
name = "_switch to py3"
def gen_tasks(self):
"""Beg the user to switch to python 3."""
def give_warning():
if sys.version_info[0] == 3:
return
if has_python_3():
LOGGER.warn(random.choice(PY2_BARBS))
LOGGER.warn(PY2_WARNING)
else:
LOGGER.warn(PY2_AND_NO_PY3_WARNING)
task = {
'basename': self.name,
'name': 'please!',
'actions': [give_warning],
'clean': True,
'uptodate': [doit.tools.timeout(datetime.timedelta(days=3))]
}
return task
|
The CRNAs are really nice and give you a lot of autonomy. Once you know what you are doing, they let you induce the patients and run the case. You get plenty of a-lines, neuro cases, and trauma. Even a major trauma every once in a while.
Harris is overall a good/great clinical site. We work Monday-Friday, 5:15ish to whenever your room is done, usually between 3 and 5 p.m. I normally have between 52-60 hours/week. No call shifts. There are always about 11-12 residents from TWU/TCU combined. We rotate working evening shifts (3-11); every 2 weeks it's someone else's turn.
Good clinical site. I already have all my case numbers except for hearts. You get a good amount of central lines and art lines. Good staff. MDAs are OK. A few are a pain, but you get that everywhere.
Great experience. You work 65-80 hours a week, 45 on a good week. You are on call 4-6 times a month, including a 24-hour call on weekends. The experience is amazing. Patients are mainly ASA 3-4. The only thing is that we don't get that many hearts, but we do lots of thoracics, neuro, spines, and GI cases like Whipples!
This is a small rural 2-OR facility with a GI room. Closely-knit staff. Slow pace. Complicated comorbidities may cause a case to be cancelled. Dr. Juan Quintana, the AANA Region 7 director, is the clinical coordinator.
|
import sys
sys.path.insert(1,"../../")
import h2o
from tests import pyunit_utils
def upload_import_small():
# Connect to a pre-existing cluster
various_datasets = ["smalldata/iris/iris.csv", "smalldata/iris/iris_wheader.csv", "smalldata/prostate/prostate.csv",
"smalldata/prostate/prostate_woheader.csv.gz"]
for dataset in various_datasets:
uploaded_frame = h2o.upload_file(pyunit_utils.locate(dataset))
imported_frame = h2o.import_file(pyunit_utils.locate(dataset))
rows_u, cols_u = uploaded_frame.dim
rows_i, cols_i = imported_frame.dim
assert rows_u == rows_i, "Expected same number of rows regardless of method. upload: {0}, import: " \
"{1}.".format(rows_u, rows_i)
assert cols_u == cols_i, "Expected same number of cols regardless of method. upload: {0}, import: " \
"{1}.".format(cols_u, cols_i)
if __name__ == "__main__":
pyunit_utils.standalone_test(upload_import_small)
else:
upload_import_small()
|
What special tools do I actually need?
Depends upon how much you can fabricate yourself. The list is quite formidable, with many fixtures being single-purpose, and includes RMS and IMS tooling, cam timing equipment, special tools to hold the rods while putting the cases together, etc. You can easily spend more than a few quid getting ready to do one of these engines. If you can find one of the early printed OEM manual sets, they list the required tooling and fixtures.
Google Laser Tools 4717. I'd say that was the bare minimum you will need. You can buy them for about £400 if you shop around. I bought a set for a friend direct from the manufacturer in Taiwan, and if you drop me a PM I'll pass on the contact details if you want them.
Same tool set is available on eBay for $399 US. Just look for Porsche 996 tools.
|
import time
import re
from collections import defaultdict
PYCLASSNAME_RE = re.compile(r"(.+?)\(")
def construct_actor_groups(actors):
"""actors is a dict from actor id to an actor or an
actor creation task The shared fields currently are
"actorClass", "actorId", and "state" """
actor_groups = _group_actors_by_python_class(actors)
stats_by_group = {
name: _get_actor_group_stats(group)
for name, group in actor_groups.items()
}
summarized_actor_groups = {}
for name, group in actor_groups.items():
summarized_actor_groups[name] = {
"entries": group,
"summary": stats_by_group[name]
}
return summarized_actor_groups
def actor_classname_from_task_spec(task_spec):
return task_spec.get("functionDescriptor", {})\
.get("pythonFunctionDescriptor", {})\
.get("className", "Unknown actor class").split(".")[-1]
def _group_actors_by_python_class(actors):
groups = defaultdict(list)
for actor in actors.values():
actor_class = actor["actorClass"]
groups[actor_class].append(actor)
return dict(groups)
def _get_actor_group_stats(group):
    state_to_count = defaultdict(int)
executed_tasks = 0
min_timestamp = None
num_timestamps = 0
sum_timestamps = 0
now = time.time() * 1000 # convert S -> MS
for actor in group:
state_to_count[actor["state"]] += 1
if "timestamp" in actor:
if not min_timestamp or actor["timestamp"] < min_timestamp:
min_timestamp = actor["timestamp"]
num_timestamps += 1
sum_timestamps += now - actor["timestamp"]
if "numExecutedTasks" in actor:
executed_tasks += actor["numExecutedTasks"]
if num_timestamps > 0:
avg_lifetime = int((sum_timestamps / num_timestamps) / 1000)
max_lifetime = int((now - min_timestamp) / 1000)
else:
avg_lifetime = 0
max_lifetime = 0
return {
"stateToCount": state_to_count,
"avgLifetime": avg_lifetime,
"maxLifetime": max_lifetime,
"numExecutedTasks": executed_tasks,
}
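A minimal, self-contained sketch of the grouping step above. The actor records here are hypothetical stand-ins with just the shared fields the module expects:

```python
from collections import defaultdict

def group_by_class(actors):
    # Same pattern as _group_actors_by_python_class above: bucket actor
    # dicts by their "actorClass" field.
    groups = defaultdict(list)
    for actor in actors.values():
        groups[actor["actorClass"]].append(actor)
    return dict(groups)

# Hypothetical actor records keyed by actor id.
actors = {
    "a1": {"actorClass": "Worker", "state": "ALIVE"},
    "a2": {"actorClass": "Worker", "state": "DEAD"},
    "a3": {"actorClass": "Trainer", "state": "ALIVE"},
}
groups = group_by_class(actors)
print(sorted(groups))         # ['Trainer', 'Worker']
print(len(groups["Worker"]))  # 2
```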
|
The meaning of the sculpture is the expression "sitting on the fence"; two teenagers are thinking over life and pondering which way they will go. I gave a hint of a relationship between the two teenagers, but that's merely to make one think and to give it this extra dimension. The sculpture was purchased by Greensboro Beautiful and is displayed at the Bicentennial Garden in Greensboro, NC. It is also available as a smaller, 8-inch sculpture.
|
#!/usr/bin/env python
'''
Script to convert a pcap file containing UDP and TCP packets to a corpus file.
'''
import sys, getopt, pprint, os
from sqlite3 import dbapi2 as sqlite
import pcap
from optparse import OptionParser
from socket import AF_INET, IPPROTO_UDP, IPPROTO_TCP, inet_ntop, ntohs, ntohl, inet_ntoa
import struct
from CorpusBuilder import CorpusBuilder
ETHERTYPE_IP = 0x0800 # IP protocol
ETHERTYPE_ARP = 0x0806 # Addr. resolution protocol
ETHERTYPE_REVARP = 0x8035 # reverse Addr. resolution protocol
ETHERTYPE_VLAN = 0x8100 # IEEE 802.1Q VLAN tagging
ETHERTYPE_IPV6 = 0x86dd # IPv6
#
# A dictionary of active TCP streams
#
tcp_streams = {}
#
# A dictionary of UDP streams
#
udp_streams = {}
#
# Current stream id
cur_stream_id = 0
def usage(exeName) :
errmsg = "Usage: %s -i <pcap-file> -o <sqlite-file>"
errmsg = errmsg % exeName
print >> sys.stderr, errmsg
sys.exit(-1)
class FiveTuple(object):
def __init__(self, protocol, src_addr, src_port, dst_addr, dst_port):
self.protocol = protocol
self.src_addr = src_addr
self.src_port = src_port
self.dst_addr = dst_addr
self.dst_port = dst_port
def __str__(self):
return "%d,%s,%d,%s,%d" % (self.protocol, self.src_addr, self.src_port, self.dst_addr, self.dst_port)
class UdpSegment:
"""Definition of a UDP segment
"""
def __init__(self, five_tuple, header, payload):
self.five_tuple = five_tuple
self.udp_header = header
self.udp_payload = payload
class TcpSegment:
"""Definition of a TCP segment
"""
def __init__(self, five_tuple, header, payload):
self.five_tuple = five_tuple
self.tcp_header = header
self.tcp_payload = payload
self.tcp_sequence_number, self.tcp_acknowledgement_number = struct.unpack('!LL', header[4:12])
def opt_isset_FIN(self):
opts = ord(self.tcp_header[13]) & 0x3F
return (opts & 0x01)
def opt_isset_SYN(self):
opts = ord(self.tcp_header[13]) & 0x3F
return (opts & 0x02)
def get_sequence_number(self):
return self.tcp_sequence_number
def __cmp__(self, other):
return cmp(self.tcp_sequence_number, other.tcp_sequence_number)
class TcpStream:
"""Definition of a TCP stream.
"""
TCP_STREAM_ACTIVE = 0x1
TCP_STREAM_CLOSED = 0x02
def __init__(self, five_tuple):
self.five_tuple = five_tuple
self.initial_sequence_number = 0
self.segments = []
def reset_stream(self):
self.segments = []
self.initial_sequence_number = 0
def set_initial_sequence_number(self, sequence_number):
self.initial_sequence_number = sequence_number
def append_segment(self, tcp_segment):
if len(self.segments) == 0:
self.set_initial_sequence_number(tcp_segment.get_sequence_number())
self.segments.append(tcp_segment)
def get_segments_sorted(self):
return sorted(self.segments)
class UdpStream:
"""A container for UDP packets that share the same 5-tuple
"""
def __init__(self, five_tuple):
self.five_tuple = five_tuple
self.segments = []
def append_segment(self, udp_segment):
self.segments.append(udp_segment)
def newStream(five_tuple):
'''
Create a new stream using the arguments passed-in and return its ID.
'''
global cur_stream_id
stream_id = cur_stream_id
cur_stream_id += 1
return stream_id
def process_tcp_segment(builder, segment):
"""Process a tcp segment. It checks for SYN and FIN segments are
if set modifies the associated stream.
"""
segment_id = str(segment.five_tuple)
if segment_id in tcp_streams:
m_tcp_stream = tcp_streams[segment_id]
m_tcp_stream.append_segment(segment)
else:
m_tcp_stream = TcpStream(segment.five_tuple)
m_tcp_stream.append_segment(segment)
tcp_streams[segment_id] = m_tcp_stream
if segment.opt_isset_SYN():
m_tcp_stream.segments = []
if segment.opt_isset_FIN():
#
# Finished with the stream - add the segments in the
# stream to db allowing the stream to be reused.
#
db_add_tcp_stream_segments(builder, m_tcp_stream)
del tcp_streams[segment_id]
def process_udp_segment(builder, segment):
""" Process a UDP segment. Given the connectionless nature of the UDP
protocol we simple accumulate the segment for later processing
when all the packets have been read
"""
segment_id = str(segment.five_tuple)
if segment_id in udp_streams:
m_udp_stream = udp_streams[segment_id]
m_udp_stream.append_segment(segment)
else:
m_udp_stream = UdpStream(segment.five_tuple)
m_udp_stream.append_segment(segment)
udp_streams[segment_id] = m_udp_stream
def db_add_tcp_stream_segments(builder, tcp_stream):
"""Add the contents of a tcp stream to the database
"""
tcp_segments = tcp_stream.get_segments_sorted()
last_sequence_num = 0
streamID = None
for tcp_segment in tcp_segments:
if (len(tcp_segment.tcp_payload) > 0) and (tcp_segment.tcp_sequence_number > last_sequence_num):
#
# Segment with an actual payload - add it to the stream's
# list of chunks.
#
            # Note: delay creating the stream until we have a viable chunk to
            # commit to it
#
            if streamID is None:
streamID = newStream(tcp_stream.five_tuple)
builder.add_chunk(streamID, tcp_segment.tcp_payload)
last_sequence_num = tcp_segment.tcp_sequence_number
def db_add_udp_stream_segments(builder, udp_stream):
"""Add the contents of a UDP stream to the database. Since UDP is
connection-less, a UDP stream object is really just an accumulation
of all the packets associated with a given 5-tuple.
"""
udp_segments = udp_stream.segments
streamID = None
for udp_segment in udp_segments:
if len(udp_segment.udp_payload) > 0:
            if streamID is None:
streamID = newStream(udp_stream.five_tuple)
builder.add_chunk(streamID, udp_segment.udp_payload)
def enchunk_pcap(pcapFN, sqliteFN):
"""Read the contents of a pcap file with name @pcapFN and produce
a sqlite db with name @sqliteFN. It will contain chunks of data
    from TCP and UDP streams.
"""
if not os.path.exists(pcapFN):
print >> sys.stderr, "Input file '%s' does not exist. Exiting." % pcapFN
sys.exit(-1)
builder = CorpusBuilder(sqliteFN)
#
# Read in the contents of the pcap file, adding stream segments as found
#
pkt_cnt = 0
ip_pkt_cnt = 0
ip_pkt_off = 0
unsupported_ip_protocol_cnt = 0
pcap_ref = pcap.pcap(pcapFN)
done = False
while not done:
try:
ts, packet = pcap_ref.next()
except:
break
pkt_cnt += 1
linkLayerType = struct.unpack('!H', packet[(pcap_ref.dloff - 2):pcap_ref.dloff])[0]
#
# We're only interested in IP packets
#
if linkLayerType == ETHERTYPE_VLAN:
linkLayerType = struct.unpack('!H', packet[(pcap_ref.dloff + 2):(pcap_ref.dloff + 4)])[0]
if linkLayerType != ETHERTYPE_IP:
continue
else:
ip_pkt_off = pcap_ref.dloff + 4
elif linkLayerType == ETHERTYPE_IP:
ip_pkt_off = pcap_ref.dloff
else:
continue
ip_pkt_cnt += 1
ip_pkt_total_len = struct.unpack('!H', packet[ip_pkt_off + 2: ip_pkt_off + 4])[0]
ip_pkt = packet[ip_pkt_off:ip_pkt_off + ip_pkt_total_len]
pkt_protocol = struct.unpack('B', ip_pkt[9])[0]
if (pkt_protocol != IPPROTO_UDP) and (pkt_protocol != IPPROTO_TCP):
#
# we're only interested in UDP and TCP packets at the moment
#
continue
pkt_src_addr = inet_ntoa(ip_pkt[12:16])
pkt_dst_addr = inet_ntoa(ip_pkt[16:20])
ip_hdr_len_offset = (ord(ip_pkt[0]) & 0x0f) * 4
ip_payload = ip_pkt[ip_hdr_len_offset:len(ip_pkt)]
pkt_src_port, pkt_dst_port = struct.unpack('!HH', ip_payload[0:4])
five_tuple = FiveTuple(pkt_protocol, pkt_src_addr, pkt_src_port, pkt_dst_addr, pkt_dst_port)
five_tuple_id = str(five_tuple)
if pkt_protocol == IPPROTO_UDP:
udp_payload_len = struct.unpack('!H', ip_payload[4:6])[0] - 8
udp_header = ip_payload[0:8]
udp_payload = ip_payload[8:len(ip_payload)]
udp_segment = UdpSegment(five_tuple, udp_header, udp_payload)
process_udp_segment(builder, udp_segment)
elif pkt_protocol == IPPROTO_TCP:
tcp_hdr_len = (ord(ip_payload[12]) >> 4) * 4
tcp_header = ip_payload[0:tcp_hdr_len]
tcp_payload = ip_payload[tcp_hdr_len:len(ip_payload)]
segment = TcpSegment(five_tuple, tcp_header, tcp_payload)
process_tcp_segment(builder, segment)
#
# Having read the contents of the pcap, we fill the database with any
# remaining TCP and UDP segments
#
for tcp_stream in tcp_streams.itervalues():
db_add_tcp_stream_segments(builder, tcp_stream)
for udp_stream in udp_streams.itervalues():
db_add_udp_stream_segments(builder, udp_stream)
#
# We've finished with the database
#
builder.finish()
if __name__ == '__main__' :
args = getopt.getopt(sys.argv[1:], 'i:o:')
args = dict(args[0])
requiredKeys = [ '-i', '-o']
for k in requiredKeys :
if not args.has_key(k) :
usage(os.path.basename(sys.argv[0]))
fnArgs = tuple([ args[k] for k in requiredKeys ])
enchunk_pcap(*fnArgs)
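As a quick illustration of the flag tests in `TcpSegment.opt_isset_SYN` and `opt_isset_FIN` above: byte 13 of the TCP header carries the flag bits, with 0x02 marking SYN and 0x01 marking FIN. The flags value below is a made-up example, not taken from a real capture:

```python
# Byte 13 of a TCP header holds the flags; mask off the reserved bits
# (0x3F) and test individual bits, as the script above does.
flags_byte = 0x12              # SYN + ACK, typical of a handshake reply
opts = flags_byte & 0x3F
syn_set = bool(opts & 0x02)
fin_set = bool(opts & 0x01)
print(syn_set, fin_set)        # True False
```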
|
Many people involved in bodybuilding work endlessly to promote the best interests of the sport, but are sometimes never recognized for their efforts.
Through my Classic Anatomy Gym Bodybuilding Hall of Fame Award I want to show my support and appreciation for their hard work and dedication to our sport.
The most important consideration for their membership is selflessness. Someone willing to help out others and never expect anything in return. This is a rare commodity in today's world.
There are many unsung heroes in our sport. Some are past greats who are now often forgotten, people who inspired a whole generation of bodybuilders to reach their full potential. Newcomers to the sport tend to only know current names and faces.
There are bodybuilding writers, photographers and videographers who deserve to be remembered and honored. People who led the way in this sport before fitness became "big business".
|
"""
#How to use
#run ipython
execfile('immcheck.py')
iname = '/local/testa_00200-01199.imm'
checkFile(iname)
#to read headers of images
fp = open(iname,'r')
h = readHeader(fp)
print h
#call this if you are skipping image data and not reading it.
getNextHeaderPos(fp,h)
h = readHeader(fp)
print h
getNextHeaderPos(fp,h)
clf()
h = readHeader(fp)
img = getImage(fp,h)
#do not call getNextHeader. getImage seeks to next header already
print h
#display image
figimage(img)
clf()
h = readHeader(fp)
img = getImage(fp,h)
#do not call getNextHeader. getImage seeks to next header already
print h
#display image
figimage(img)
#etc ...
#raw IMM
iname = '/local/testyraw_00100-00199.imm'
checkFile(iname)
fp.close()
"""
import struct
#comment these out if not drawing images and only reading headers
import numpy as np
import matplotlib.pyplot as plt
imm_headformat = "ii32s16si16siiiiiiiiiiiiiddiiIiiI40sf40sf40sf40sf40sf40sf40sf40sf40sf40sfffiiifc295s84s12s"
imm_fieldnames = [
'mode',
'compression',
'date',
'prefix',
'number',
'suffix',
'monitor',
'shutter',
'row_beg',
'row_end',
'col_beg',
'col_end',
'row_bin',
'col_bin',
'rows',
'cols',
'bytes',
'kinetics',
'kinwinsize',
'elapsed',
'preset',
'topup',
'inject',
'dlen',
'roi_number',
'buffer_number',
'systick',
'pv1',
'pv1VAL',
'pv2',
'pv2VAL',
'pv3',
'pv3VAL',
'pv4',
'pv4VAL',
'pv5',
'pv5VAL',
'pv6',
'pv6VAL',
'pv7',
'pv7VAL',
'pv8',
'pv8VAL',
'pv9',
'pv9VAL',
'pv10',
'pv10VAL',
'imageserver',
'CPUspeed',
'immversion',
'corecotick',
'cameratype',
'threshhold',
'byte632',
'empty_space',
'ZZZZ',
'FFFF'
]
iname = '/local/testa_00200-01199.imm'
def checkFile(fname):
fp = open(fname,'rb')
lastcor=-1
    lastbn = -1
n_corerror = 0
n_bnerror = 0
while True:
h = readHeader(fp)
if h!='eof':
print 'buffer number %d'%h['buffer_number']
print 'corecotick %d'%h['corecotick']
if lastbn==-1: lastbn = h['buffer_number']-1
if lastcor==-1: lastcor = h['corecotick']-1
dbn = h['buffer_number'] - lastbn
dcor = h['corecotick'] - lastcor
if dbn>1: n_bnerror=n_bnerror+1
if dcor>1: n_corerror = n_corerror+1
lastbn = h['buffer_number']
lastcor = h['corecotick']
getNextHeaderPos(fp,h)
else: break
print "Skipped Buffer numbers %d"%n_bnerror
print "Skipped Corecoticks %d"%n_corerror
fp.close()
def readHeader(fp):
bindata = fp.read(1024)
if bindata=='':
return('eof')
imm_headerdat = struct.unpack(imm_headformat,bindata)
imm_header ={}
for k in range(len(imm_headerdat)):
imm_header[imm_fieldnames[k]]=imm_headerdat[k]
return(imm_header)
def getNextHeaderPos(fp,header):
dlen = header['dlen']
if header['compression']==6:
fp.seek(dlen*6,1)
else:
fp.seek(dlen*2,1)
#getImage requires numpy; comment out if numpy is unavailable
def getImage(fp,h):
dlen = h['dlen']
if h['compression']==6:
loc_b = fp.read(4*dlen)
pixloc = struct.unpack('%di'%dlen,loc_b)
val_b = fp.read(2*dlen)
pixval = struct.unpack('%dH'%dlen,val_b)
imgdata = np.array( [0] * (h['rows'] * h['cols']))
for k in range(dlen):
imgdata[ pixloc[k] ] = pixval[k]
else:
pixdat=fp.read(2*dlen)
pixvals=struct.unpack('%dH'%dlen,pixdat)
imgdata=np.array(pixvals)
imgdata = imgdata.reshape(h['rows'], h['cols'])
return(imgdata)
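A toy illustration of the header-parsing pattern `readHeader` uses above: `struct.unpack` turns a fixed-layout binary blob into a tuple, which is then zipped with field names into a dict. The tiny format string here is invented for the demo, not the real 1024-byte IMM layout:

```python
import struct

# Pack a miniature "header": two ints and an 8-byte date string.
fmt = "ii8s"
fieldnames = ["mode", "compression", "date"]
blob = struct.pack(fmt, 2, 6, b"20240101")

# Unpack and map values to names, mirroring the readHeader loop above.
header = dict(zip(fieldnames, struct.unpack(fmt, blob)))
print(header["mode"], header["compression"])  # 2 6
```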
|
“Magic Night” from Pushing Daisies (Season 2) (advance) by Dooley, Jim. Released: 2011. Track 5 of 31. Genre: Soundtrack.
|
# -*- coding: utf-8 -*-
#
# Copyright (c) 2015, Alcatel-Lucent Inc, 2017 Nokia
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions are met:
# * Redistributions of source code must retain the above copyright
# notice, this list of conditions and the following disclaimer.
# * Redistributions in binary form must reproduce the above copyright
# notice, this list of conditions and the following disclaimer in the
# documentation and/or other materials provided with the distribution.
# * Neither the name of the copyright holder nor the names of its contributors
# may be used to endorse or promote products derived from this software without
# specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
# DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY
# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
from .fetchers import NUWANServicesFetcher
from .fetchers import NUMetadatasFetcher
from .fetchers import NUWirelessPortsFetcher
from .fetchers import NUGlobalMetadatasFetcher
from .fetchers import NUPortsFetcher
from .fetchers import NUNSPortsFetcher
from .fetchers import NUEventLogsFetcher
from bambou import NURESTObject
class NUAutoDiscoveredGateway(NURESTObject):
""" Represents a AutoDiscoveredGateway in the VSD
Notes:
            Represents an auto-discovered gateway.
"""
__rest_name__ = "autodiscoveredgateway"
__resource_name__ = "autodiscoveredgateways"
## Constants
CONST_PERSONALITY_HARDWARE_VTEP = "HARDWARE_VTEP"
CONST_PERSONALITY_VSA = "VSA"
CONST_PERSONALITY_VSG = "VSG"
CONST_ENTITY_SCOPE_GLOBAL = "GLOBAL"
CONST_PERSONALITY_OTHER = "OTHER"
CONST_PERSONALITY_VRSB = "VRSB"
CONST_PERSONALITY_NSG = "NSG"
CONST_PERSONALITY_VRSG = "VRSG"
CONST_PERSONALITY_DC7X50 = "DC7X50"
CONST_ENTITY_SCOPE_ENTERPRISE = "ENTERPRISE"
def __init__(self, **kwargs):
""" Initializes a AutoDiscoveredGateway instance
Notes:
                You can specify all parameters while calling this method.
A special argument named `data` will enable you to load the
object from a Python dictionary
Examples:
>>> autodiscoveredgateway = NUAutoDiscoveredGateway(id=u'xxxx-xxx-xxx-xxx', name=u'AutoDiscoveredGateway')
>>> autodiscoveredgateway = NUAutoDiscoveredGateway(data=my_dict)
"""
super(NUAutoDiscoveredGateway, self).__init__()
# Read/Write Attributes
self._name = None
self._last_updated_by = None
self._gateway_id = None
self._peer = None
self._personality = None
self._description = None
self._entity_scope = None
self._controllers = None
self._vtep = None
self._external_id = None
self._system_id = None
self.expose_attribute(local_name="name", remote_name="name", attribute_type=str, is_required=True, is_unique=False)
self.expose_attribute(local_name="last_updated_by", remote_name="lastUpdatedBy", attribute_type=str, is_required=False, is_unique=False)
self.expose_attribute(local_name="gateway_id", remote_name="gatewayID", attribute_type=str, is_required=False, is_unique=False)
self.expose_attribute(local_name="peer", remote_name="peer", attribute_type=str, is_required=False, is_unique=False)
self.expose_attribute(local_name="personality", remote_name="personality", attribute_type=str, is_required=True, is_unique=False, choices=[u'DC7X50', u'HARDWARE_VTEP', u'NSG', u'OTHER', u'VRSB', u'VRSG', u'VSA', u'VSG'])
self.expose_attribute(local_name="description", remote_name="description", attribute_type=str, is_required=False, is_unique=False)
self.expose_attribute(local_name="entity_scope", remote_name="entityScope", attribute_type=str, is_required=False, is_unique=False, choices=[u'ENTERPRISE', u'GLOBAL'])
self.expose_attribute(local_name="controllers", remote_name="controllers", attribute_type=list, is_required=False, is_unique=False)
self.expose_attribute(local_name="vtep", remote_name="vtep", attribute_type=str, is_required=False, is_unique=False)
self.expose_attribute(local_name="external_id", remote_name="externalID", attribute_type=str, is_required=False, is_unique=True)
self.expose_attribute(local_name="system_id", remote_name="systemID", attribute_type=str, is_required=False, is_unique=False)
# Fetchers
self.wan_services = NUWANServicesFetcher.fetcher_with_object(parent_object=self, relationship="child")
self.metadatas = NUMetadatasFetcher.fetcher_with_object(parent_object=self, relationship="child")
self.wireless_ports = NUWirelessPortsFetcher.fetcher_with_object(parent_object=self, relationship="child")
self.global_metadatas = NUGlobalMetadatasFetcher.fetcher_with_object(parent_object=self, relationship="child")
self.ports = NUPortsFetcher.fetcher_with_object(parent_object=self, relationship="child")
self.ns_ports = NUNSPortsFetcher.fetcher_with_object(parent_object=self, relationship="child")
self.event_logs = NUEventLogsFetcher.fetcher_with_object(parent_object=self, relationship="child")
self._compute_args(**kwargs)
# Properties
@property
def name(self):
""" Get name value.
Notes:
Name of the Gateway
"""
return self._name
@name.setter
def name(self, value):
""" Set name value.
Notes:
Name of the Gateway
"""
self._name = value
@property
def last_updated_by(self):
""" Get last_updated_by value.
Notes:
ID of the user who last updated the object.
This attribute is named `lastUpdatedBy` in VSD API.
"""
return self._last_updated_by
@last_updated_by.setter
def last_updated_by(self, value):
""" Set last_updated_by value.
Notes:
ID of the user who last updated the object.
This attribute is named `lastUpdatedBy` in VSD API.
"""
self._last_updated_by = value
@property
def gateway_id(self):
""" Get gateway_id value.
Notes:
The Gateway associated with this Auto Discovered Gateway. This is a read only attribute
This attribute is named `gatewayID` in VSD API.
"""
return self._gateway_id
@gateway_id.setter
def gateway_id(self, value):
""" Set gateway_id value.
Notes:
The Gateway associated with this Auto Discovered Gateway. This is a read only attribute
This attribute is named `gatewayID` in VSD API.
"""
self._gateway_id = value
@property
def peer(self):
""" Get peer value.
Notes:
The System ID of the peer gateway associated with this Gateway instance when it is discovered by the network manager (VSD) as being redundant.
"""
return self._peer
@peer.setter
def peer(self, value):
""" Set peer value.
Notes:
The System ID of the peer gateway associated with this Gateway instance when it is discovered by the network manager (VSD) as being redundant.
"""
self._peer = value
@property
def personality(self):
""" Get personality value.
Notes:
Personality of the Gateway - VSG,VRSG,NONE,OTHER, cannot be changed after creation.
"""
return self._personality
@personality.setter
def personality(self, value):
""" Set personality value.
Notes:
Personality of the Gateway - VSG,VRSG,NONE,OTHER, cannot be changed after creation.
"""
self._personality = value
@property
def description(self):
""" Get description value.
Notes:
A description of the Gateway
"""
return self._description
@description.setter
def description(self, value):
""" Set description value.
Notes:
A description of the Gateway
"""
self._description = value
@property
def entity_scope(self):
""" Get entity_scope value.
Notes:
Specify if scope of entity is Data center or Enterprise level
This attribute is named `entityScope` in VSD API.
"""
return self._entity_scope
@entity_scope.setter
def entity_scope(self, value):
""" Set entity_scope value.
Notes:
Specify if scope of entity is Data center or Enterprise level
This attribute is named `entityScope` in VSD API.
"""
self._entity_scope = value
@property
def controllers(self):
""" Get controllers value.
Notes:
                Controllers with which this gateway instance is associated.
"""
return self._controllers
@controllers.setter
def controllers(self, value):
""" Set controllers value.
Notes:
                Controllers with which this gateway instance is associated.
"""
self._controllers = value
@property
def vtep(self):
""" Get vtep value.
Notes:
                Represents the system ID or the Virtual IP of a service used by a Gateway (VSG for now) to establish a tunnel with a remote VSG or hypervisor. The format of this field is consistent with an IP address.
"""
return self._vtep
@vtep.setter
def vtep(self, value):
""" Set vtep value.
Notes:
                Represents the system ID or the Virtual IP of a service used by a Gateway (VSG for now) to establish a tunnel with a remote VSG or hypervisor. The format of this field is consistent with an IP address.
"""
self._vtep = value
@property
def external_id(self):
""" Get external_id value.
Notes:
External object ID. Used for integration with third party systems
This attribute is named `externalID` in VSD API.
"""
return self._external_id
@external_id.setter
def external_id(self, value):
""" Set external_id value.
Notes:
External object ID. Used for integration with third party systems
This attribute is named `externalID` in VSD API.
"""
self._external_id = value
@property
def system_id(self):
""" Get system_id value.
Notes:
Identifier of the Gateway
This attribute is named `systemID` in VSD API.
"""
return self._system_id
@system_id.setter
def system_id(self, value):
""" Set system_id value.
Notes:
Identifier of the Gateway
This attribute is named `systemID` in VSD API.
"""
self._system_id = value
|
With iconic Monogram canvas on one side and supple Taïga leather on the other, the LV Initials 40mm reversible belt is a versatile accessory. Its focal point is the polished metal that frames the subtle tone-on-tone LV Initials buckle. An understated design, it complements the Spring-Summer 2019 menswear collection effortlessly.
|
#!/usr/bin/python
"""
Copyright (C) 2012 Kolibre
This file is part of kolibre-clientcore.
Kolibre-clientcore is free software: you can redistribute it and/or modify
it under the terms of the GNU Lesser General Public License as published by
the Free Software Foundation, either version 2.1 of the License, or
(at your option) any later version.
Kolibre-clientcore is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Lesser General Public License for more details.
You should have received a copy of the GNU Lesser General Public License
along with kolibre-clientcore. If not, see <http://www.gnu.org/licenses/>.
"""
from BaseHTTPServer import BaseHTTPRequestHandler, HTTPServer
import getopt, sys, os, re, ssl
input = None
orderfile = None
class FakeSoapServer(BaseHTTPRequestHandler):
# override log request method
def log_request(self, code=None, size=None):
pass
def do_GET(self):
self.send_error(404,'File Not Found: %s' % self.path)
    def do_POST(self):
        response = self.getResponse()
        if response is None:
            # an error response has already been sent
            return
        self.send_response(200)
        self.send_header('Content-Length', len(response))
        self.send_header('Content-Type', 'text/xml; charset=utf-8')
        self.end_headers()
        self.wfile.write(response)
        self.incrementOrder()
        return
def getSoapAction(self):
action = None
for name, value in sorted(self.headers.items()):
if name.lower() == "soapaction":
action = value.strip()
if action is None:
self.sendInternalError('SOAPAction not found in header')
return
return action.lstrip('"/').rstrip('"')
def getOrder(self):
f = open(orderfile, 'r')
order = int(f.read())
self.order = order
f.close()
return order
def incrementOrder(self):
f = open(orderfile, 'w')
order = self.order + 1
f.write(str(order))
f.close()
    def getResponse(self):
        SoapAction = self.getSoapAction()
        if SoapAction is None:
            # getSoapAction has already sent an error response
            return
        responsefile = input + '/' + str(self.getOrder()) + '_' + SoapAction
        if not os.path.exists(responsefile):
            self.sendInternalError('input file ' + responsefile + ' not found')
            return
f = open(responsefile)
content = f.read()
f.close()
# pattern for finding beginning of soap envelope "<SOAP-ENV:Envelope"
pattern = re.compile('<[\w-]*:Envelope', re.IGNORECASE)
# find last position of pattern and substring from there
matches = pattern.findall(content)
start = content.rfind(matches[len(matches)-1])
body = content[start:]
# manipulate response if SOAPAction is logOn
if SoapAction == 'logOn':
request_len = int(self.headers.getheader('content-length'))
request = self.rfile.read(request_len)
if 'incorrect' in request:
body = body.replace('logOnResult>true', 'logOnResult>false')
return body
def sendInternalError(self, faultstring):
soapfault = '<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/" xmlns:ns1="http://www.daisy.org/ns/daisy-online/"><SOAP-ENV:Body><SOAP-ENV:Fault><faultcode>SOAP-ENV:Server</faultcode><faultstring>' + faultstring + '</faultstring><faultactor></faultactor><detail><ns1:internalServerErrorFault/></detail></SOAP-ENV:Fault></SOAP-ENV:Body></SOAP-ENV:Envelope>'
self.send_response(500)
self.send_header('Content-Length', len(soapfault))
self.send_header('Content-Type', 'text/xml; charset=utf-8')
self.end_headers()
self.wfile.write(soapfault)
return
def usage():
print ''
print 'usage: python ' + sys.argv[0] + ' -i <dir> -o <file>'
print
print 'required arguments:'
print ' -i, --input <dir>\t\tpath to folder containing soap responses'
print ' -o, --order <file>\t\tpath to file controlling the order'
print
print 'optional arguments:'
print ' -p, --port <port>\t\tport to listen on [default: 8080]'
print ' -s, --ssl\t\t\tuse ssl'
print ' -c, --cert <cert>\t\tpath to certificate'
print ' -h, --help\t\t\tshow this help message and exit'
if __name__ == '__main__':
host = 'localhost'
port = 8080
secure = False
cert = None
# parse command line options
try:
        opts, args = getopt.getopt(sys.argv[1:], 'hp:i:o:sc:', ['help', 'port=', 'input=', 'order=', 'ssl', 'cert='])
except getopt.GetoptError, err:
sys.stderr.write(str(err)) # will print something like "option -a not recognized"
usage()
sys.exit(2)
for opt, value in opts:
if opt in ('-h', '--help'):
usage()
sys.exit()
elif opt in ('-p', '--port'):
port = int(value)
elif opt in ('-i', '--input'):
input = value
elif opt in ('-o', '--order'):
orderfile = value
elif opt in ('-s', '--ssl'):
secure = True
elif opt in ('-c', '--cert'):
cert = value
# check if input exists
if input is None:
usage()
sys.exit()
if not os.path.exists(input):
        sys.stderr.write("error: input '" + input + "' does not exist\n")
sys.exit(2)
if orderfile is None:
usage()
sys.exit()
if not os.path.exists(orderfile):
        sys.stderr.write("error: orderfile '" + orderfile + "' does not exist\n")
sys.exit(2)
if secure and cert is None:
sys.stderr.write("error: specify a certificate to use with ssl\n")
sys.exit(2)
if secure and not os.path.exists(cert):
        sys.stderr.write("error: certificate '" + cert + "' does not exist\n")
sys.exit(2)
# start server
try:
server = HTTPServer(('', port), FakeSoapServer)
if secure:
server.socket = ssl.wrap_socket(server.socket, certfile=cert, server_side=True)
server.serve_forever()
except KeyboardInterrupt:
server.socket.close()
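The order file that `getOrder`/`incrementOrder` maintain is just a counter persisted between requests, so each POST is served the next canned response in sequence. A self-contained sketch of the same round-trip (helper names are illustrative):

```python
import os
import tempfile

def read_order(path):
    # Read the current response index from the order file.
    with open(path, 'r') as f:
        return int(f.read())

def increment_order(path):
    # Advance the index so the next request gets the next canned response.
    order = read_order(path)
    with open(path, 'w') as f:
        f.write(str(order + 1))

# Seed an order file starting at 0, as a test harness would.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, 'w') as f:
    f.write('0')

increment_order(path)
```

Persisting the counter on disk keeps the sequence intact even though `BaseHTTPRequestHandler` creates a fresh handler instance per request.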
|
Newly updated 2018! Main Living area is spacious & bright with access to private balcony for endless views of the Gulf of Mexico.
Quaint dining area for sharing family meals, playing games together or just talking together about all the great & fun events at the beach for the day.
View of the newly updated kitchen & breakfast bar along with the new queen size sleeper sofa.
Kitchen breakfast bar is perfect for meals, snacks and entertaining with afternoon cocktails to enjoy in the unit or bring to the beach.
Updated kitchen cabinets add clean lines to the already open concept & upscale look of the granite counter tops.
View of the open concept kitchen & spacious main living area with access to your private balcony that overlooks the Calypso resort campus, white powder sand beach and Gulf of Mexico.
Master bedroom has a king size bed for a restful sleep.
Master bedroom has a large dresser for storage & flat screen TV with extended channels.
Master en suite has a marbled tub / shower combo and granite vanity with cabinet storage.
Guest bath has a large walk in shower with glass doors, granite counter top vanity and cabinet storage.
Convenient, private laundry in unit saves you trips to the laundromat!
View of one of the cozy chairs on the large balcony to enjoy reading a book or magazine with a morning coffee, tea or afternoon cocktail.
View of the brilliant blue-green Gulf of Mexico and white powder sand beach. Perfect elevation to see dolphins, manatees, sea rays and schools of fish.
Wake up and sip your morning coffee or juice and take in the gentle morning rolling waves and beautiful views of the beach... from your private balcony.
View from your balcony to the west with the City Pier as the back drop.
Watch the weather change in the distance as you sit on your balcony in peace.
View of the east beach side pool & Calypso Resort Tiki Bar in the morning.
Calypso Resort is the closest resort on the beach to Pier Park ~ we are located right across the street ~ no need to get in the car again.
Jet-skiing, tubing, riding on the banana boat, snorkeling, plus more fun beach things to do at Calypso Resort Beach ~ for Calypso Guests only!
Experience the spectacular fireworks display from the City Pier from your private balcony. No reason to fight the traffic or crowds.
New Years Eve and the 4th of July are 2 of the several occasions throughout the year for fireworks displays from the City Pier.
Daily helicopter rides ~ reserve your flight today!
Enjoy sunsets by the poolside with cocktails and friends to close out a day of your Calypso Resort vacation.
Enjoy your cocktails with breathtaking views of the Gulf of Mexico rolling onto the powder white sands from your own private balcony, watching unforgettable sunsets, or walk to Pier Park for dinner and a movie. You can enjoy it all in this luxurious, beachfront Panama City Beach condo filled with all the amenities. With its cheerful coastal décor and spacious layout on the 14th floor of the West Tower, Calypso 1403W is a perfect unit for every guest. This unit has all the modern amenities: high-speed internet/WiFi, flat screen TVs in the living area and master bedroom, a fully stocked kitchen with all the essentials, bed and bathroom linens, and a private balcony showcasing the breathtaking views from the main living area. The total square footage for this unit is 945 sq ft.
The living area is spacious with custom furnishings and artwork, as well as providing modern amenities like a flat screen TV, DVD player and high speed internet/ WiFi. The sofa can be converted to a queen/full size pullout for extra guests. Adjacent to the living area is a formal dining table.
Very Clean Cozy Beach Home!
Thank you for letting us rent your home. It's a very clean, cozy beach home. I loved all of the beach decor. It is definitely something I would like to rent again. After nine months of major surgery and major chemo, it was just the place we needed to go to relax and enjoy the beach, sun and sand. I'm so ready to go back soon.
We loved this rental. It included everything that we needed. We would prefer more towels be included. We were there for a ball tournament and between ballgames and beach time and dinners out we had to wash towels very often. Overall very satisfied with the rental and highly recommend.
We have enjoyed our 6 weeks and are booked in for 2019. A beautifully decorated, clean, cheery unit. The kitchen was well equipped. We have stayed at the Calypso since it first opened and this is by far the nicest accommodation we have had. The staff of White Sands Rental were all very helpful and accommodating.
Loved the condo and its location! The view was beautiful! The short walk to Pier Park was a plus. Thank you for the opportunity to stay in your condo! Hope to return!
Great views and lovely place to stay!
My daughter and I have stayed at your place twice now and have loved every minute of it. We never have any problems and enjoy ourselves very much with the great views and lovely place to stay. We will be back many more times. Thanks for making our trip a great one. Melissa and Carley.
Thank you! Your condo is perfect in every way; location, cleanliness, and you make booking so easy! Danielle, Kim and Jan were all quick to respond with any questions we had, great customer service all the way. And the view from your balcony - WOW...we will return!!
Thank you Jan & Danielle! Our condo rental was amazing! The balcony view was breathtaking! This is definitely the best location to be at & very close to everything!
We absolutely enjoyed ourselves! Convenient location to Pier Park! Pools and beach area were nice.
We enjoyed our stay in this condo. The blackout shades made it really easy to sleep in each morning. The condo was super clean, and the reserved beach chairs were an added bonus. The 14th floor was high enough that we didn't hear any noise below, and it has a great view from the balcony. Danielle was super easy to work with, so we will definitely be back soon!
Thank you Jeff & Marita. We're glad you enjoyed the unit and the convenience of the complimentary beach chair service that comes with this unit. We hope to have you back with us again soon!
Thank you Derek, we're glad you enjoyed your stay. I believe for the beach chairs you meant to say, "...so no need to bring a set." The beach chair service that comes with this unit is very convenient and a great value! Hope to have you back with us soon.
My family and I had a wonderful time. The space was perfect from the views to the accommodations. All of the information and instruction from start to finish was seamless. You and your team made things very simple and pleasant. From check in to check out our family got to maximize our vacation time. Really enjoyed the beach chair rentals that came along with the rental. Thank you again for the opportunity with our limited time frame.
Just returned from a 5-day stay at this great condo, which was even better than the photos depict. We had everything we needed for a relaxing vacation. The kitchen is well-equipped, the TVs are really nice quality, and most importantly, the view from the balcony is simply awesome! We thoroughly enjoyed the beach chair service (included), and the Tiki Bar at this complex. We barely got in the car, as this condo is in walking distance to everything that Pier Park has to offer. We will definitely return!
We stayed with our family of four for 2 weeks and absolutely loved this condo. We felt right at home, everything was so clean and it was beautifully decorated. The view from the balcony is gorgeous and we loved watching the sunsets. Absolutely breathtaking! I`ve also never seen a nicer beach with white sand and we really appreciated the beach chairs that were included. The kids loved the pool and of course Pier Park which is just across the street. You have everything you need in walking distance. We `re definitely coming back next year!!
Perfect unit for beach vacation!
Wow what a great vacation! This unit is furnished so nicely and had everything we needed for a wonderful and relaxing stay at the beach. It was ultra clean and very well maintained. The view of the perfect green/blue water was incredible. We can't wait to come back here, and will definitely stay here again.
Thank you so much for allowing our family to stay at your condo. We frequently visit Panama City yearly; however, this was our first visit at Calypso. Dave was amazing to work with, even without us giving a month's notice. The pictures shown are exactly what you are getting. With one exception... the beds, including the sofa sleeper, are super comfortable!!! I will be booking ALL our stays with this condo. Thank you again for such a great Labor Day weekend!!!!
We greatly appreciate you taking the time to tell us about your stay at our condo. We try to give our renters an A+ experience. I am so glad we met your expectations!!! We look forward to you staying with us again.
Calypso Unit 116530-A FIVE STAR UNIT!
First of all I would like to thank the most amazing Dave and Jan for such an awesome, worry-free vacation. My family and I were very pleased with this amazing condo! The pictures online portrayed a condo that was exactly what our family found when we checked in. This was the best unit, we believe, in the entire complex. It was clean, well maintained, warm and inviting. We spent the week there and were sad when we had to leave. Working with the Evras family was like working with a family member of our own. Very professional, yet helpful and only a phone call away. Best of all, we were in walking distance from an amazing shopping area called Pier Park where there was shopping, movie theaters, restaurants and an area for kids to enjoy as well. I am so thankful that my browsing stopped on 116530, I will definitely return in the near future. Please do not look any further, this is the unit to rent!!!
Thank you so much for your kind words! We are so glad you enjoyed your vacation. That is truly our goal. We hope you can come and stay with us again!!!
Dave was so easy to work with! We were able to put together a beach trip by phone/fax less than 48 hours before arriving. He also gave a great local breakfast recommendation! The condo was exactly as pictured. Beautiful beachfront condo in walking distance of Pier Park! Can't wait for our next trip!
Our stay at your condo was everything we wanted and then some. It has all of the amenities of being "home" away from home. The decor is beautiful and bright. The reserved parking and the beach chair service made it even more appealing. We loved being across from Pier Park - very convenient to everything. We will be back and hopefully for a longer stay!
Thank you so much for taking the time to send a review. We are so glad you had a perfect stay! That is exactly what we strive for and we hope you can stay with us again.
Great vacation in a beautiful, clean and well maintained condo. This condo had gorgeous views of the ocean off of the balcony. This condo not only met, but exceeded all my expectations!!!
|
# -*- coding: utf-8 -*-
#
# DDS_Cpp_GPB_Tutorial build configuration file, created by
# ReST Editor on 27-Apr-2015
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
import time
# import liteconfig
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.todo']
# Add any paths that contain templates here, relative to this directory.
templates_path = [u'_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
source_encoding = u'utf-8-sig'
# The master toctree document.
master_doc = u'index'
# General information about the project.
project = u'OpenSplice GPB Tutorial'
this_year = time.strftime( '%Y' )
copyright = u'{y}, ADLINK Technology Limited'.format( y = this_year )
print 'Copyright string is:', copyright
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
#version = u's'
#version = liteconfig.version
version = u'6.x'
# The full version, including alpha/beta/rc tags.
#release = u's'
release = version
#release = u'00'
print 'Short version string is:', version
print 'Full version string is:', release
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
language = u'en'
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
# Force blank date with today = ' ' (space, not empty string)
today = ' '
# ***************
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = []
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
# html_theme = u'sphinxdoc'
html_theme = u'vortextheme'
# LINUX PATH:
html_theme_path = ['../../.']
# WINDOWS PATH:
# html_theme_path = ['..\..\.']
#build theme directory in lite using environment variable, so shared amongst books
# insight team can delete,
#html_theme_path = [os.environ['VL_HOME'] + '/build/docs']
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
html_title = 'OpenSplice GPB Tutorial'
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
#html_short_title = 'HTML short Title conf.py'
#html_short_title = ' '
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
html_logo = './images/Vortex_logo_2014.png'
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = []
html_static_path = []
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
html_show_sphinx = False
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'OpenSpliceGPBTutorial'
# -- Options for LaTeX output --------------------------------------------------
# The paper size ('letter' or 'a4').
latex_paper_size = u'a4'
# The font size ('10pt', '11pt' or '12pt').
latex_font_size = u'10pt'
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
# latex_documents = [('index', 'OpenSpliceGettingStartedGuide.tex', u'OpenSplice Getting Started Guide', u'', 'manual', True)]
latex_documents = [('index', 'OpenSplice_GPBTutorial.tex', u'OpenSplice GPB Tutorial', u'', 'manual', True)]
# ***************
# Note 'author' field empty
# Added 'True' to end of generated line to suppress 'Index & Tables'
# A dictionary that contains LaTeX snippets that override those Sphinx usually
# puts into the generated .tex files.
# (superseded by the combined latex_elements dict defined further down)
#latex_elements = { 'babel': '\\usepackage[english]{babel}' }
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
latex_logo = 'images/Vortex-OpenSplice-Cover.png'
# ***************
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Additional stuff for the LaTeX preamble.
#latex_preamble = ''
# ***************
# * THIS GETS RID OF BLANK PAGES AT ENDS OF CHAPTERS & ToC
latex_elements = {
'classoptions': ',openany, oneside',
'babel': '\\usepackage[english]{babel}'
}
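Note that assigning `latex_elements` a second time replaces the whole dictionary rather than extending it; `dict.update()` merges instead. A quick illustration with a stand-in dict:

```python
# plain reassignment discards earlier keys
latex_opts = {'babel': '\\usepackage[english]{babel}'}
latex_opts = {'classoptions': ',openany, oneside'}  # 'babel' is now gone

# update() merges, keeping both entries
latex_opts = {'babel': '\\usepackage[english]{babel}'}
latex_opts.update({'classoptions': ',openany, oneside'})
```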
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [('index', 'OpenSpliceGPBTutorial', u'OpenSplice GPB Tutorial', [u'ADLINK Technology Limited'], 1)]
# * NOT TESTED
# -- Additional options --------------------------------------------------------
todo_include_todos = True
# * NOT TESTED
|
101. (M 15) Plutarchus. Adversus Coloten 20.
|
##### NEED TO FIX DIVISION OPERATOR!!!!!
from __future__ import division
#http://docs.python.org/release/2.2.3/whatsnew/node7.html
#The most controversial change in Python 2.2 heralds the start of an effort to fix an old design flaw that's been in Python from the beginning. Currently Python's division operator, /, behaves like C's division operator when presented with two integer arguments: it returns an integer result that's truncated down when there would be a fractional part. For example, 3/2 is 1, not 1.5, and (-1)/2 is -1, not -0.5. This means that the results of division can vary unexpectedly depending on the type of the two operands; and because Python is dynamically typed, it can be difficult to determine the possible types of the operands.
import time
from time import gmtime, strftime, localtime
import datetime
def now():
# now = strftime("%I:%M:%S %p", gmtime())
now = strftime("%A, %B %d, %Y @ %I:%M:%S %p", localtime())
return now
#G1 Code Object
class G1Code:
def __init__(self, X=0, Y=0, Z=0, F=0):
        self.X = X
        self.Y = Y
self.Z = Z
self.F = F
def __str__(self):
string = "G1 X" + str(self.X) + " Y" + str(self.Y) + " Z" + str(self.Z) + " F" + str(self.F)
return string
################## CHECK IF NUMBER IS EVEN OR ODD ###################
def checkIfEvenOrOdd(number):
if number%2==0:
return "even"
else:
return "odd"
class StartSintering:
def __init__(self, PWM=0, pause_msec=1000): #default to PWM=0 for safety
self.PWM = PWM
self.pause_msec = pause_msec
def __str__(self):
if self.pause_msec != 0:
string = "M128 S" + str(self.PWM) + " ; EXTRUSION pressure set\nG4 P4000 ; wait for 4 sec to get up to pressure\nM126 ; Start Extrusion at this PWM\nG4 P" + str(self.pause_msec) + " ; wait for " + str(self.pause_msec) + " milliseconds before movement\n"
else:
string = "M128 S" + str(self.PWM) + " ; EXTRUSION pressure set\nG4 P4000 ; wait for 4 sec to get up to pressure\nM126 ; Start Extrusion at this PWM WITH NO DELAY FOR NEXT MOVEMENT\n"
return string
class StopSintering:
def __init__(self, PWM=0, pause_msec=1000): #changed PWM=255 to PWM=0-- off better than full power.
self.PWM = PWM
self.pause_msec = pause_msec
def __str__(self):
if self.pause_msec != 0:
string = "M127 ; (Stop Sugar Extruding)\nM128 S255 ; (Vent EtoP and stop current)\nG4 P" + str(self.pause_msec) + " ; pause for " + str(self.pause_msec) + " milliseconds\n"
else:
string = "M127 ; (Stop Sugar Extruding)\nM128 S255 ; (Vent EtoP and stop current)\n"
return string
#DEFINE VARIABLES:
regularSpeed = 600 #mm/s? in/min?
defaultPWM = 30
# PHYSICAL REGION IN WHICH TO GENERATE LINES FOR TESTING POWER PARAMETER SPACE
SquareSize = 80
numLines = 10 #number of test lines to sinter in target area
ThisGCode = G1Code(X=0, Y=0, Z=0, F=regularSpeed)
SweetStart = StartSintering(PWM=defaultPWM)
SweetStop = StopSintering()
fname = "2d.gcode"
print "Preparing to output: " + fname
#Open the output f and paste on the "headers"
f = open(fname,"w")
f.writelines(";(***************SPEED/POWER TRAVERSALS*********************)\n")
f.writelines(";(*** " + str(now()) + " ***)\n")
#f.writelines(""";(Copyright 2012 Jordan Miller, jmil@rice.edu, All Rights Reserved)
#;(*** Using significantly modified/customized Marlin Firmware, RAMBo ***)
#M127 ; Laser Off
#M129 ; Laser PWM set to zero
#""")
f.writelines("G92 X0 Y0 Z0 ; you are now at 0,0,0\n")
f.writelines("""G90 ; absolute coordinates
;(***************End of Beginning*********************)
""")
# GO TO 0,0
#ThisGCode.X = 0
#ThisGCode.Y = 0
#ThisGCode.F = regularSpeed
#f.writelines(str(ThisGCode)+ "\n")
linSpacing = SquareSize/numLines
#laserPWM = 30
#for i in range(0, numLines):
# ThisGCode.X = 0
# ThisGCode.Y = i*linSpacing
# ThisGCode.F = regularSpeed
# f.writelines(str(ThisGCode) + "\n")
#messy, but functional:
#def G1(targetX, targetY, speed):
#cmd = "G1 X" + targetX + " Y" + targetY + " E" + speed
#return cmd
currX = 0
lineLength = 10
laserSpeed = 25 #mm/s
laserSpeed *= 60 #mm/min
linSpacing = 0.2
#for y in range(0,50,linSpacing):
# f.writelines("M701 S40\n")
#
# f.writelines("G1 X" + str(lineLength) + " Y" + str(y) + " F" + str(laserSpeed) + "\n") #give user heads up about the impending laser blast
# f.writelines("M701 S0\n")
# f.writelines("G1 X0 Y" + str(y+linSpacing) + " F5000\n\n")
# laserSpeed -= 60
for x in range(0,50,1):
f.writelines("M128 S100\n")
f.writelines("G1 X" + str(x/5) + " Y" + str(lineLength) + " F960" + "\n")
f.writelines("M128 S0\n")
f.writelines("G1 X" + str(x/5+linSpacing) + " Y0 F5000\n\n")
f.writelines("M128 S0 \n")
f.writelines("""
;(end of the file, shutdown routines)
M127 ; Laser Off
M701 S0 ; Laser PWM set to zero
M84 ; motors off
""")
f.close()
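The `G1Code` rendering used throughout the script can be sanity-checked in isolation; a minimal sketch with the class reproduced so the snippet stands alone:

```python
class G1Code:
    """Holds one linear-move G-code command and renders it as text."""
    def __init__(self, X=0, Y=0, Z=0, F=0):
        self.X = X
        self.Y = Y
        self.Z = Z
        self.F = F

    def __str__(self):
        # Render as a G1 linear move: target position plus feed rate.
        return "G1 X%s Y%s Z%s F%s" % (self.X, self.Y, self.Z, self.F)

line = str(G1Code(X=10, Y=5, Z=0, F=600))  # -> "G1 X10 Y5 Z0 F600"
```

Keeping the command as an object lets the script mutate X/Y/F between moves and re-render, which is exactly how the traversal loops above emit lines.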
|
The subconscious rules so much of our lives from behind the scenes. From the moment that the sperm meets the egg and begins the process of forming a life, you are governed by the subconscious - your mother's and your own. The subconscious is the force that tells your heart how to beat, when to blink, when to feel hungry.
This is not an easy answer. There is not one method that will make everyone’s subconscious change its old patterns to conform to the new paradigm that we wish to consciously install. I have decided to explore the methods that are either the most popular or the most effective for me personally, which I do not see being explored as much as I think they should be. Try them out, experiment and explore, and you will find something that works for you so long as you maintain an open mind.
Chances are, we all have heard about the many many benefits of meditation, so I won’t go on beating this dead horse for too long. All you need to know for these purposes is that when you meditate you blur the boundary of the conscious and unconscious and begin a sort of integration process.
|
import sys
from datetime import datetime
from xml.sax import make_parser, handler
# Type conversion helper functions
def parse_date(args):
return datetime.strptime(args, '%d.%m.%Y')
class ExportSaxParser(handler.ContentHandler):
"""Parses Shodan's export XML file and executes the callback for each
entry.
"""
# Callbacks
entry_cb = None
# Keep track of where we're at
_in_host = False
_in_data = False
_host = None
_data = u''
# Conversion schemas
_host_attr_schema = {
'port': int,
'updated': parse_date,
}
def __init__(self, entry_cb=None):
# Define the callbacks
self.entry_cb = entry_cb
# ContentHandler methods
def startElement(self, name, attrs):
if name =='host':
# Extract all the attribute information
self._host = {}
for (name, value) in attrs.items():
# Convert the field to a native type if it's defined in the schema
self._host[name] = self._host_attr_schema.get(name, lambda x: x)(value)
# Update the state machine
self._in_host = True
elif name == 'data':
self._in_data = True
self._data = u''
def endElement(self, name):
if name == 'host':
# Execute the callback
self.entry_cb(self._host)
# Update the state machine
self._in_host = False
elif name == 'data':
self._host['data'] = self._data
self._in_data = False
def characters(self, content):
if self._in_data:
self._data += content
class ExportParser(object):
entry_cb = None
def __init__(self, entry_cb=None):
self.entry_cb = entry_cb
def parse(self, filename):
parser = make_parser()
parser.setContentHandler(ExportSaxParser(self.entry_cb))
parser.parse(filename)
if __name__ == '__main__':
    def test_cb(entry):
        print(entry)
parser = ExportParser(test_cb)
parser.parse(sys.argv[1])
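The type-conversion idiom in `startElement` (look up a converter per attribute name, falling back to the identity function) is handy on its own. A minimal, self-contained sketch of it, using made-up attribute values:

```python
from datetime import datetime

# Converters keyed by attribute name; anything not listed stays a string.
schema = {
    'port': int,
    'updated': lambda s: datetime.strptime(s, '%d.%m.%Y'),
}

attrs = {'ip': '198.51.100.7', 'port': '80', 'updated': '05.03.2012'}

# Convert each field to a native type if a converter is defined, else keep it as-is.
host = {k: schema.get(k, lambda x: x)(v) for k, v in attrs.items()}
# host['port'] is now an int, host['updated'] a datetime, and
# host['ip'] is left untouched as a string.
```

Because the lookup defaults to the identity function, new attributes appearing in an export never break the conversion step.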
|
This is going to sound strange coming from an ex-recruiter, but don’t let your LinkedIn profile read like a résumé. Many people think LinkedIn is a site to showcase your résumé and search for job opportunities, but it’s a lot more: it’s your opportunity to build your professional brand.
I was a reasonably early adopter (in Ireland) and the first iteration of my profile was basically a copy and paste of my résumé. Over time I’ve learned that what I needed to do is move away from thinking of my profile as my résumé and start to think of it as my online reputation and brand.
If you Google yourself, what’s the first link that appears? Nine times out of ten it’s your LinkedIn profile that will come top of the search results. Ask yourself “what does my LinkedIn profile say about me? Does it give my prospects and customers the impression that I am a competent professional, knowledgeable in my industry and can help solve a business problem or create value?” If it doesn’t then perhaps it’s time for a change.
Here’s my advice on how you can turn your online résumé into something far more powerful (and yes, recruiters will still be able to find you!).
Did you know that there are 45 million profile views every day on LinkedIn, and that you are 11 times more likely to have your profile viewed if you have a picture? One sure way of being ignored is to omit a photo. Think about the last time you went house or flat hunting on the web: did you click through to any properties where there was no picture?
When you do add a photo, don’t be tempted to post one of a night out in the pub, scuba diving or holding a baby (even a very cute baby). A simple headshot set against a neutral background, looking approachable and smartly dressed, is perfect. It doesn’t have to be professionally taken, but it should look professional.
Also, profiles with photos receive a 40% higher InMail response rate, because people like to see who they’re speaking to.
The first thing people read (after they’ve looked at your profile picture) is the headline. This appears directly under your name and if you haven’t customised it, it will have defaulted to your current job title. Why not stand out from the crowd a little? You can edit your headline and include up to 120 characters. My advice is to add your job title, current company and a tagline about how you help customers. Here are two of my colleagues who have done just that: Jochem Verbunt and Shannon Kato.
Your summary is, in essence, the ‘story of you’, and again the key is to think of your audience when you write it. Try to avoid bullet points and make your summary more conversational. I recommend writing it in the first person and keeping it relatively short: 3–4 paragraphs at most. To make yourself more easily found, include several keywords relating to your industry; that way you will surface more easily in searches.
You should give the viewer an insight into how you got to where you are today and what you currently do, and if you have an example of how you’ve helped a customer, it’s a great idea to include it here. It’s OK to inject a little personality and passion. I love my colleague Carmen Pop’s profile summary, as you really get a sense of her personality when you read it.
If you are in sales, don’t be tempted to highlight in your summary that you crushed your sales targets year after year, as it may scare off prospective customers. Instead, move this to the “Honours and Awards” section.
Finally, if you feel comfortable, add a call to action. Invite the viewer to get in touch with you and be specific as to how they can do that. Including your contact details might result in a direct inbound lead.
Turn your profile into a mini marketing platform for your services or solutions by adding presentations, videos or whitepapers. You can add media beneath your summary or any of your experience.
Before you start to make sweeping changes to your LinkedIn profile, you may want to adjust your privacy settings first and switch off updates to your network every time you make a change. To do this, simply click on the thumbnail photo on the top right of your home page, select “Privacy & Settings” and then under Privacy Controls click on “Turn on/off your activity broadcasts”.
These are just a few pointers to get you started so you can really optimise your profile. For more tips download our Build your Sales Profile tip sheet, which has been especially created with sales professionals in mind or join me for a live Webinar on 25 September at 16:00 GMT / 08:00 PST called Transform Your Profile. Click here to register.
|
#!/usr/local/bin/python
# -*- coding: utf-8 -*-
import os
import time
from string import Template
import random
import re
import helpers.urls as urls
from IPython.core.display import HTML, display
from collections import Counter
path = os.path.dirname(os.path.abspath(__file__))
def create_wordcloud( data, plt, stopwords = ["the", "a", "or", "tai", "and", "ja", "to", "on", "in", "of", "for", "is", "i", "this"], width=850, height=350 ):
import codecs
html_template = Template( codecs.open( path + '/wordcloud.html', 'r').read() )
js_template = Template(codecs.open(path + '/wordcloud.js', 'r').read())
css_text = codecs.open(path + '/wordcloud.css', 'r').read()
texts = ""
for node in data:
text = encode_utf8(node['text_content']).lower()
## clean up: remove URLs non-alphabet characters
text = re.sub( urls.URL_REGEXP, ' ', text )
text = re.sub('[^a-zöä\s]+', ' ', text)
for word in text.split(" "): ## todo? should we use nltk?
if word not in stopwords:
texts += word + " "
frequencies = Counter(texts.split())
freqs_list = []
colors = ["#A5E6BA", "#9AC6C5", "#7785AC", "#248757", "#360568", "#F0544F", "#e07f9f", "#1d7059", "#3e6282"]
    for key, value in frequencies.items():
freqs_list.append({"text":encode_utf8(key),"size":str(value), "color": random.choice(colors)})
graph_div_id = int(time.time() * 1000)
js_text = js_template.substitute({'frequencies': str(freqs_list), 'graph_div_id': graph_div_id, 'width': width, 'height': height})
html_template = html_template.substitute({'js': js_text, 'graph_div_id': graph_div_id, 'css': css_text})
display( HTML( html_template ) )
return None
def encode_utf8( string ):
    # Python 3 str is already text; only decode if raw bytes slip through.
    if isinstance(string, bytes):
        return string.decode('utf8', errors='replace')
    return string
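The cleaning-and-counting core of `create_wordcloud` can be factored into a small pure function that is easier to test in isolation (a sketch, not part of the original module; it keeps the same `a-z` plus `ö`/`ä` character class used above):

```python
import re
from collections import Counter

def word_frequencies(text, stopwords=("the", "a", "and", "to", "of")):
    # Lowercase, strip everything that is not a-z, ö, ä or whitespace,
    # then count the remaining words, skipping stopwords.
    cleaned = re.sub(r'[^a-zöä\s]+', ' ', text.lower())
    return Counter(w for w in cleaned.split() if w not in stopwords)

freqs = word_frequencies("The quick brown fox jumps over the lazy dog!")
# 'the' is a stopword, so it is absent; every other word appears once.
```

Because `Counter` returns 0 for unseen keys, callers can query any word without guarding against `KeyError`.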
|
An orofacial myofunctional disorder (OMD) is an abnormal lip, jaw, or tongue position during rest, swallowing, or speech. You may also see it with prolonged oral habits, like thumb or finger sucking. OMD can be caused by upper airway obstruction.
|
"""
Flask-Goat
-------------
Flask-Goat is a plugin for security and user administration via GitHub OAuth2 & team structure within your organization.
"""
from setuptools import setup
setup(
name='Flask-Goat',
version='0.2.1',
url='http://incontextsolutions.github.io/flask-goat/',
license='MIT',
author='Tristan Wietsma',
author_email='tristan.wietsma@incontextsolutions.com',
description='Flask plugin for security and user administration via GitHub OAuth & organization',
long_description=__doc__,
packages=['flask_goat'],
zip_safe=False,
include_package_data=True,
platforms='any',
    tests_require=[
'coverage',
'nose',
'httmock',
'pep8',
'pyflakes',
],
install_requires=[
'Flask',
'redis',
'simplejson',
'requests',
],
classifiers=[
'Environment :: Web Environment',
'Intended Audience :: Developers',
'License :: OSI Approved :: MIT License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
'Topic :: Software Development :: Libraries :: Python Modules',
],
)
|
Winners!
Thank you to all who participated in our commenting contest. It was our first contest ever to encourage our audience to participate in discussion. We love your comments so please continue to share your thoughts, dreams, and suggestions.
Congratulations! You will receive your free t-shirt and water bottle soon.
This entry was posted in News, Special Offers and tagged commenting contest.
i received my shirt and bottle as well!!
|
import numpy as np
import pytest
import pytoolkit as tk
def test_to_str():
class_names = ["class00", "class01", "class02"]
y = tk.od.ObjectsAnnotation(
"path/to/dummy.jpg",
100,
100,
np.array([1, 2]),
np.array([[0.900, 0.900, 1.000, 1.000], [0.000, 0.000, 0.200, 0.200]]),
)
p = tk.od.ObjectsPrediction(
np.array([1, 0, 2]),
np.array([0.8, 0.1, 0.8]),
np.array(
[[0.900, 0.900, 1.000, 1.000], [0, 0, 1, 1], [0.000, 0.000, 0.200, 0.200]]
),
)
assert (
y.to_str(class_names)
== "(0, 0) [20 x 20]: class02\n(90, 90) [10 x 10]: class01"
)
assert (
p.to_str(100, 100, class_names, 0.5)
== "(0, 0) [20 x 20]: class02\n(90, 90) [10 x 10]: class01"
)
def test_plot_objects(data_dir, check_dir):
img_path = data_dir / "od" / "JPEGImages" / "無題.png"
class_names = ["~", "〇"]
ann = tk.od.ObjectsAnnotation(
img_path, 768, 614, [0], [[203 / 768, 255 / 614, 601 / 768, 355 / 614]]
)
img = tk.od.plot_objects(img_path, ann.classes, None, ann.bboxes, None)
tk.ndimage.save(check_dir / "plot_objects1.png", img)
img = tk.od.plot_objects(img_path, ann.classes, None, ann.bboxes, class_names)
tk.ndimage.save(check_dir / "plot_objects2.png", img)
img = tk.od.plot_objects(
img_path, ann.classes, np.array([0.5]), ann.bboxes, class_names
)
tk.ndimage.save(check_dir / "plot_objects3.png", img)
def test_iou():
bboxes_a = np.array([[0, 0, 200, 200], [1000, 1000, 1001, 1001]])
bboxes_b = np.array([[100, 100, 300, 300]])
iou = tk.od.compute_iou(bboxes_a, bboxes_b)
assert iou.shape == (2, 1)
assert iou[0, 0] == pytest.approx(100 * 100 / (200 * 200 * 2 - 100 * 100))
assert iou[1, 0] == 0
def test_is_in_box():
boxes_a = np.array([[100, 100, 300, 300]])
boxes_b = np.array(
[
[150, 150, 250, 250],
[100, 100, 300, 300],
[50, 50, 350, 350],
[150, 150, 350, 350],
]
)
is_in = tk.od.is_in_box(boxes_a, boxes_b)
assert is_in.shape == (len(boxes_a), len(boxes_b))
assert (is_in == [[False, True, True, False]]).all()
def test_od_accuracy():
y_true = np.tile(
np.array(
[
tk.od.ObjectsAnnotation(
path=".",
width=100,
height=100,
classes=[0, 1],
bboxes=[[0.00, 0.00, 0.05, 0.05], [0.25, 0.25, 0.75, 0.75]],
)
]
),
6,
)
y_pred = np.array(
[
            # exact match
tk.od.ObjectsPrediction(
classes=[1, 0],
confs=[1, 1],
bboxes=[[0.25, 0.25, 0.75, 0.75], [0.00, 0.00, 0.05, 0.05]],
),
            # low confidence
tk.od.ObjectsPrediction(
classes=[1, 0, 0],
confs=[1, 0, 1],
bboxes=[
[0.25, 0.25, 0.75, 0.75],
[0.00, 0.00, 0.05, 0.05],
[0.00, 0.00, 0.05, 0.05],
],
),
            # wrong class
tk.od.ObjectsPrediction(
classes=[1, 1],
confs=[1, 1],
bboxes=[[0.25, 0.25, 0.75, 0.75], [0.00, 0.00, 0.05, 0.05]],
),
            # duplicate
tk.od.ObjectsPrediction(
classes=[1, 0, 0],
confs=[1, 1, 1],
bboxes=[
[0.25, 0.25, 0.75, 0.75],
[0.00, 0.00, 0.05, 0.05],
[0.00, 0.00, 0.05, 0.05],
],
),
            # missing detection
tk.od.ObjectsPrediction(
classes=[1], confs=[1], bboxes=[[0.25, 0.25, 0.75, 0.75]]
),
            # low IoU
tk.od.ObjectsPrediction(
classes=[1, 0],
confs=[1, 1],
bboxes=[[0.25, 0.25, 0.75, 0.75], [0.90, 0.90, 0.95, 0.95]],
),
]
)
is_match_expected = [True, True, False, False, False, False]
for yt, yp, m in zip(y_true, y_pred, is_match_expected):
assert yp.is_match(yt.classes, yt.bboxes, conf_threshold=0.5) == m
assert tk.od.od_accuracy(y_true, y_pred, conf_threshold=0.5) == pytest.approx(2 / 6)
def test_confusion_matrix():
y_true = np.array([])
y_pred = np.array([])
cm_actual = tk.od.confusion_matrix(y_true, y_pred, num_classes=3)
assert (cm_actual == np.zeros((4, 4), dtype=np.int32)).all()
y_true = np.array(
[
tk.od.ObjectsAnnotation(
path=".",
width=100,
height=100,
classes=[1],
bboxes=[[0.25, 0.25, 0.75, 0.75]],
)
]
)
y_pred = np.array(
[
tk.od.ObjectsPrediction(
classes=[0, 2, 1, 1, 2],
confs=[0, 1, 1, 1, 1],
bboxes=[
                    [0.25, 0.25, 0.75, 0.75],  # low confidence
                    [0.25, 0.25, 0.75, 0.75],  # wrong class
                    [0.25, 0.25, 0.75, 0.75],  # detected
                    [0.25, 0.25, 0.75, 0.75],  # duplicate
                    [0.95, 0.95, 0.99, 0.99],  # low IoU
],
)
]
)
cm_actual = tk.od.confusion_matrix(
y_true, y_pred, conf_threshold=0.5, num_classes=3
)
cm_expected = np.array(
[[0, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 1, 2, 0]], dtype=np.int32
)
assert (cm_actual == cm_expected).all()
|
bKreativ Productions is a multi award winning production company, based out of Vancouver and Dubai. With a wealth of experience in the fields of Film, Design and Production, we kreate with an edge.
Our experience and passion has taken us all over the world, winning awards in Los Angeles, India, Dubai, London and New York, and over the years our kreators have established relationships across various platforms and businesses within the entertainment, commercial and retail industries. We strive to make your dreams come to life; we are the ‘Kreators of the Unknown’.
|
"""
===========================
Structural similarity index
===========================
When comparing images, the mean squared error (MSE)--while simple to
implement--is not highly indicative of perceived similarity. Structural
similarity aims to address this shortcoming by taking texture into account
[1]_, [2]_.
The example shows two modifications of the input image, each with the same MSE,
but with very different mean structural similarity indices.
.. [1] Zhou Wang; Bovik, A.C.; ,"Mean squared error: Love it or leave it? A new
look at Signal Fidelity Measures," Signal Processing Magazine, IEEE,
vol. 26, no. 1, pp. 98-117, Jan. 2009.
.. [2] Z. Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli, "Image quality
assessment: From error visibility to structural similarity," IEEE
Transactions on Image Processing, vol. 13, no. 4, pp. 600-612,
Apr. 2004.
"""
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
from skimage import data, img_as_float
from skimage.measure import structural_similarity as ssim
matplotlib.rcParams['font.size'] = 9
img = img_as_float(data.camera())
rows, cols = img.shape
noise = np.ones_like(img) * 0.2 * (img.max() - img.min())
noise[np.random.random(size=noise.shape) > 0.5] *= -1
def mse(x, y):
    # Mean squared error between the two images.
    return np.mean((x - y) ** 2)
img_noise = img + noise
img_const = img + abs(noise)
fig, (ax0, ax1, ax2) = plt.subplots(nrows=1, ncols=3, figsize=(8, 4), sharex=True, sharey=True, subplot_kw={'adjustable':'box-forced'})
mse_none = mse(img, img)
ssim_none = ssim(img, img, dynamic_range=img.max() - img.min())
mse_noise = mse(img, img_noise)
ssim_noise = ssim(img, img_noise,
                  dynamic_range=img_noise.max() - img_noise.min())
mse_const = mse(img, img_const)
ssim_const = ssim(img, img_const,
                  dynamic_range=img_const.max() - img_const.min())
label = 'MSE: %.2f, SSIM: %.2f'
ax0.imshow(img, cmap=plt.cm.gray, vmin=0, vmax=1)
ax0.set_xlabel(label % (mse_none, ssim_none))
ax0.set_title('Original image')
ax1.imshow(img_noise, cmap=plt.cm.gray, vmin=0, vmax=1)
ax1.set_xlabel(label % (mse_noise, ssim_noise))
ax1.set_title('Image with noise')
ax2.imshow(img_const, cmap=plt.cm.gray, vmin=0, vmax=1)
ax2.set_xlabel(label % (mse_const, ssim_const))
ax2.set_title('Image plus constant')
plt.show()
|
The front page of 24rabota.kz was last crawled by gilturl on 23-8-2010. The source comprised 235 characters, of which 105 are code, leaving an organic character count of 199. The total specialized word count on the 24rabota.kz home page is 171; we calculated the number of unique words to be 8. We extracted 2010 top keywords, leaving 140% above the threshold. Total phrases above the threshold: 2130.
|
#!/usr/bin/env python
__author__ = "Danelle Cline"
__copyright__ = "Copyright 2012, MBARI"
__license__ = "GPL"
__maintainer__ = "Danelle Cline"
__email__ = "dcline at mbari.org"
__status__ = "Development"
__doc__ = '''
This loader loads water sample data from the Southern California Coastal Ocean Observing System (SCCOS)
Harmful Algal Bloom project into the STOQS database. Each row is saved as a Sample, and each
sample measurement (column), e.g. nitrate, chlorophyll, etc. is saved as a Measurement.
To run the loader
1. Downloaded a csv from http://www.sccoos.org/query/?project=Harmful%20Algal%20Blooms&
Selecting some, or all desired measurements, and save to CSV file format
2. Create a stoqs database called stoqs_habs
3. Load with:
HABLoader.py <filename.csv> stoqs_habs
@undocumented: __doc__ parser
@author: __author__
@status: __status__
@license: __license__
'''
# Force lookup of models to THE specific stoqs module.
import os
import sys
os.environ['DJANGO_SETTINGS_MODULE']='settings'
project_dir = os.path.dirname(__file__)
# Add parent dir to pythonpath so that we can see the loaders and stoqs modules
sys.path.insert(0, os.path.join(os.path.dirname(__file__), "../") )
from django.conf import settings
from django.core.exceptions import ObjectDoesNotExist, MultipleObjectsReturned
from stoqs import models as m
from loaders import STOQS_Loader, SkipRecord
from datetime import datetime, timedelta
from pydap.model import BaseType
from django.contrib.gis.geos import fromstr, Point, LineString
import time
import numpy
import csv
import urllib2
import logging
from glob import glob
from tempfile import NamedTemporaryFile
import re
import pprint
import pytz
# Set up logging
logger = logging.getLogger('loaders')
logger.setLevel(logging.ERROR)
# When settings.DEBUG is True Django will fill up a hash with stats on every insert done to the database.
# "Monkey patch" the CursorWrapper to prevent this. Otherwise we can't load large amounts of data.
# See http://stackoverflow.com/questions/7768027/turn-off-sql-logging-while-keeping-settings-debug
from django.db.backends import BaseDatabaseWrapper
from django.db.backends.util import CursorWrapper
if settings.DEBUG:
BaseDatabaseWrapper.make_debug_cursor = lambda self, cursor: CursorWrapper(cursor, self)
class ClosestTimeNotFoundException(Exception):
pass
class SingleActivityNotFound(Exception):
pass
def get_closest_instantpoint(aName, tv, dbAlias):
'''
Start with a tolerance of 1 second and double it until we get a non-zero count,
get the values and find the closest one by finding the one with minimum absolute difference.
'''
tol = 1
num_timevalues = 0
logger.debug('Looking for tv = %s', tv)
while tol < 86400: # Fail if not found within 24 hours
qs = m.InstantPoint.objects.using(dbAlias).filter( activity__name__contains = aName,
timevalue__gte = (tv-timedelta(seconds=tol)),
timevalue__lte = (tv+timedelta(seconds=tol))
).order_by('timevalue')
if qs.count():
num_timevalues = qs.count()
break
tol = tol * 2
if not num_timevalues:
raise ClosestTimeNotFoundException
logger.debug('Found %d time values with tol = %d', num_timevalues, tol)
timevalues = [q.timevalue for q in qs]
logger.debug('timevalues = %s', timevalues)
i = 0
i_min = 0
secdiff = []
minsecdiff = tol
for t in timevalues:
        secdiff.append(abs((t - tv).total_seconds()))
if secdiff[i] < minsecdiff:
minsecdiff = secdiff[i]
i_min = i
logger.debug('i = %d, secdiff = %d', i, secdiff[i])
i = i + 1
logger.debug('i_min = %d', i_min)
return qs[i_min], secdiff[i_min]
class HABLoader(STOQS_Loader):
'''
    Inherit database loading functions from STOQS_Loader and use its constructor
'''
parameter_dict={} # used to cache parameter objects
standard_names = {} # should be defined for each child class
include_names=[] # names to include, if set it is used in conjunction with ignored_names
# Note: if a name is both in include_names and ignored_names it is ignored.
ignored_names=[] # Should be defined for each child class
loaded = 0
mindepth = 8000.0
maxdepth = -8000.0
parmCount = {}
parameterCount = {}
def __init__(self, activityName, platformName, dbAlias='default', campaignName=None,
activitytypeName=None, platformColor=None, platformTypeName=None,
startDatetime=None, endDatetime=None, dataStartDatetime=None ):
'''
Build a set of standard names using the dataset.
The activity is saved, as all the data loaded will be a set of instantpoints
that use the specified activity.
@param activityName: A string describing this activity
@param platformName: A string that is the name of the platform. If that name for a Platform exists in the DB, it will be used.
        @param platformColor: An RGB hex string representing the color of the platform.
@param dbAlias: The name of the database alias as defined in settings.py
@param campaignName: A string describing the Campaign in which this activity belongs, If that name for a Campaign exists in the DB, it will be used.
@param activitytypeName: A string such as 'mooring deployment' or 'AUV mission' describing type of activity, If that name for a ActivityType exists in the DB, it will be used.
@param platformTypeName: A string describing the type of platform, e.g.: 'mooring', 'auv'. If that name for a PlatformType exists in the DB, it will be used.
        @param startDatetime: A Python datetime.datetime object specifying the start date time of data to load
        @param endDatetime: A Python datetime.datetime object specifying the end date time of data to load
        @param dataStartDatetime: A Python datetime.datetime object specifying the start date time of data to append to an existing Activity
'''
self.campaignName = campaignName
self.activitytypeName = activitytypeName
self.platformName = platformName
self.platformColor = platformColor
self.dbAlias = dbAlias
self.platformTypeName = platformTypeName
self.activityName = activityName
self.startDatetime = startDatetime
self.endDatetime = endDatetime
self.dataStartDatetime = dataStartDatetime # For when we append data to an existing Activity
def initDB(self):
        '''Do the initial database activities that are required before the data are processed: getPlatform and createActivity.
'''
self.platform = self.getPlatform(self.platformName, self.platformTypeName)
self.createActivity()
self.addParameters(self.ds)
#self.addResources()
def load_measurement(self, lat, lon, depth, time, parmNameValues):
'''
        Load the data values recorded for each location
@parmNameValues is a list of 2-tuples of (ParameterName, Value) measured at the time and location specified by
@lat decimal degrees
@lon decimal degrees
@time Python datetime.datetime object
@depth in meters
'''
mt = None
try:
mt = self.createMeasurement(time = time,
depth = depth,
lat = lat,
long = lon)
logger.info("measurement._state.db = %s", mt._state.db)
if depth < self.mindepth:
self.mindepth = depth
if depth > self.maxdepth:
self.maxdepth = depth
except SkipRecord, e:
logger.info(e)
except Exception, e:
logger.error(e)
sys.exit(-1)
else:
logger.info("longitude = %s, latitude = %s, time = %s, depth = %s", lon, lat, time, depth)
for pn,value in parmNameValues:
logger.info("pn = %s", pn)
logger.info("parameter._state.db = %s", self.getParameterByName(pn)._state.db)
mp = m.MeasuredParameter(measurement = mt,
parameter = self.getParameterByName(pn),
datavalue = value)
try:
mp.save(using=self.dbAlias)
except ParameterNotFound:
logger.error("Unable to locate parameter for %s, skipping", pn)
continue
except Exception, e:
logger.error('Exception %s. Skipping this record.', e)
logger.error("Bad value (id=%(id)s) for %(pn)s = %(value)s", {'pn': pn, 'value': value, 'id': mp.pk})
continue
else:
self.loaded += 1
logger.info("Inserted value (id=%(id)s) for %(pn)s = %(value)s", {'pn': pn, 'value': value, 'id': mp.pk})
self.parmCount[pn] += 1
if self.parameterCount.has_key(self.getParameterByName(pn)):
self.parameterCount[self.getParameterByName(pn)] += 1
else:
self.parameterCount[self.getParameterByName(pn)] = 0
def load_sample(self, lon, lat, depth, timevalue, bottleName):
'''
Load a single water sample
'''
# Get the Activity from the Database
try:
activity = m.Activity.objects.using(self.dbAlias).get(name__contains=self.activityName)
logger.debug('Got activity = %s', activity)
except ObjectDoesNotExist:
logger.warn('Failed to find Activity with name like %s. Skipping load.', self.activityName)
return
except MultipleObjectsReturned:
logger.warn('Multiple objects returned for name__contains = %s. Selecting one by random and continuing...', self.activityName)
activity = m.Activity.objects.using(self.dbAlias).filter(name__contains=self.activityName)[0]
# Get or create SampleType
(sample_type, created) = m.SampleType.objects.using(self.dbAlias).get_or_create(name = 'Pier')
logger.debug('sampletype %s, created = %s', sample_type, created)
# Get or create SamplePurpose
(sample_purpose, created) = m.SamplePurpose.objects.using(self.dbAlias).get_or_create(name = 'StandardDepth')
logger.debug('samplepurpose %s, created = %s', sample_purpose, created)
try:
ip, seconds_diff = get_closest_instantpoint(self.activityName, timevalue, self.dbAlias)
point = 'POINT(%s %s)' % (lon, lat)
stuple = m.Sample.objects.using(self.dbAlias).get_or_create( name = bottleName,
depth = str(depth), # Must be str to convert to Decimal
geom = point,
instantpoint = ip,
sampletype = sample_type,
samplepurpose = sample_purpose,
volume = 20000.0
)
rtuple = m.Resource.objects.using(self.dbAlias).get_or_create( name = 'Seconds away from InstantPoint',
value = seconds_diff
)
            # 2nd item of tuples will be True or False depending on whether the object was created or gotten
logger.info('Loaded Sample %s with Resource: %s', stuple, rtuple)
except ClosestTimeNotFoundException:
logger.info('ClosestTimeNotFoundException: A match for %s not found for %s', timevalue, activity)
else:
logger.info('Loaded Bottle name = %s', bottleName)
def process_csv_file(self, fh):
'''
Iterate through lines of iterator to csv file and pull out data for loading into STOQS
'''
ds = {}
DA = BaseType()
DA.attributes = {'units': 'ng ml-1 ' ,
'long_name': 'Domoic Acid',
'standard_name': 'domoic_acid',
'type': 'float',
'description': 'Domoic acid' ,
'origin': 'www.sccoos.org' }
PD = BaseType()
PD.attributes = {'units': 'cells l-1',
'long_name': 'Pseudo-nitzschia delicatissima group',
'standard_name': 'pseudo_nitzschia_delicatissima',
'name': 'pseudo_nitzschia_delicatissima' ,
'type': 'float' ,
'description': 'Pseudo-nitzschia delicatissima group (cells/L)' ,
'origin': 'www.sccoos.org'
}
PA = BaseType()
PA.attributes = {'units': 'cells l-1',
'long_name': 'Pseudo-nitzschia seriata group',
'standard_name': 'pseudo_nitzschia_seriata',
'name': 'pseudo_nitzschia_seriata' ,
'type': 'float' ,
'description': 'Pseudo-nitzschia seriata group (cells/L)' ,
'origin': 'www.sccoos.org'
}
alexandrium = BaseType()
alexandrium.attributes = {'units': 'cells l-1',
'long_name': 'Alexandrium',
'standard_name': 'alexandrium',
'name': 'alexandrium' ,
'type': 'float' ,
'description': 'Alexandrium spp. (cells/L)' ,
'origin': 'www.sccoos.org'
}
phosphate = BaseType()
phosphate.attributes = {'units': 'm-3 mol l-1',
'long_name': 'Phosphate',
'standard_name': 'phosphate_dissolved_in_seawater',
'name': 'Phosphate' ,
'type': 'float' ,
'description': 'Phosphate (uM)' ,
'origin': 'www.sccoos.org'
}
ammonia = BaseType()
ammonia.attributes = {'units': 'm-3 mol l-1',
'long_name': 'Ammonia',
'standard_name': 'ammonia_dissolved_in_seawater',
'name': 'ammonia_dissolved_in_sewater' ,
'type': 'float' ,
'description': 'Ammonia (uM)' ,
'origin': 'www.sccoos.org'
}
silicate = BaseType()
silicate.attributes = {'units': 'm-3 mol l-1',
'long_name': 'Silicate',
'standard_name': 'silicate_dissolved_in_seawater',
'name': 'silicate_dissolved_in_seawater' ,
'type': 'float' ,
'description': 'Silicate (uM)' ,
'origin': 'www.sccoos.org'
}
chlorophyll = BaseType()
chlorophyll.attributes = {'units': 'kg m-3',
'long_name': 'Chlorophyll',
'standard_name': 'mass_concentration_of_chlorophyll_in_sea_water',
'name': 'mass_concentration_of_chlorophyll_in_sea_water' ,
'type': 'float' ,
'description': 'Chlorophyll (kg/m3)' ,
'origin': 'www.sccoos.org'
}
prorocentrum = BaseType()
prorocentrum.attributes = {'units': 'cells l-1',
'long_name': 'Prorocentrum',
'standard_name': 'mass_concentration_of_prorocentrum_in_sea_water',
'name': 'mass_concentration_of_prorocentrum_in_sea_water' ,
'type': 'float' ,
'description': 'Prorocentrum spp. (cells/L)' ,
'origin': 'www.sccoos.org'
}
self.ds = { 'Domoic Acid (ng/mL)': DA, 'Pseudo-nitzschia seriata group (cells/L)': PA,
'Pseudo-nitzschia delicatissima group (cells/L)': PD,
'Phosphate (uM)': phosphate,
'Silicate (uM)': silicate, 'Ammonia (uM)': ammonia,
'Chlorophyll (mg/m3)': chlorophyll, 'Chlorophyll 1 (mg/m3)': chlorophyll,
'Chlorophyll 2 (mg/m3)': chlorophyll ,
'Alexandrium spp. (cells/L)': alexandrium
}
self.include_names = ['Pseudo-nitzschia seriata group (cells/L)',
'Pseudo-nitzschia delicatissima group (cells/L)',
'Domoic Acid (ng/mL)',
'Chlorophyll (mg/m3)', 'Chlorophyll 1 (mg/m3)', 'Chlorophyll 2 (mg/m3)',
'Prorocentrum spp. (cells/L)', 'Silicate (uM)', 'Ammonia (uM)',
'Nitrate (uM)', 'Phosphate (uM)',
'Alexandrium spp. (cells/L)']
self.initDB()
for pn in self.include_names:
self.parmCount[pn] = 0
reader = csv.reader(fh)
for line in fh:
# Skip all lines that don't begin with '"' nor ' ' then open that with csv.DictReader
if not line.startswith('"') and not line.startswith(' '):
titles = reader.next()
reader = csv.DictReader(fh, titles)
for r in reader:
year = int(r['year'])
month = int(r['month'])
day = int(r['day'])
time = r['time']
lat = float(r['latitude'])
lon = float(r['longitude'])
depth = float(r['depth (m)'])
location = r['location']
hours = int(time.split(':')[0])
mins = int(time.split(':')[1])
secs = int(time.split(':')[2])
parmNameValues = []
for name in self.ds.keys():
if name.startswith('Chlorophyll'):
parmNameValues.append((name, 1e-5*float(r[name])))
else:
parmNameValues.append((name, float(r[name])))
# Check to make sure all data from this file are from the same location.
# The program could be modified to read data in one file from multiple locations by reading data into a hash keyed by location name
                # and then stepping through each key of the hash, saving the data for each location into its own activity. For now just require
# each data file to have data from just one location.
try:
if lat != lastlat or lon != lastlon:
logger.error("lat and lon are not the same for location = %s and lastlocation = %s. The input data should have just one location." % (location, lastlocation))
sys.exit(-1)
except NameError, e:
# Expected first time through when lastlon & lastlat don't yet exist
pass
# Load data
dt = datetime(year, month, day, hours, mins, secs)
self.load_measurement(lon, lat, depth, dt, parmNameValues)
# Load sample
bName = dt.isoformat()
self.load_sample(lon, lat, depth, dt, bName)
lastlat = lat
lastlon = lon
lastlocation = location
logger.info("Data load complete, %d records loaded.", self.loaded)
fh.close()
# Update the Activity with information we now have following the load
# Careful with the structure of this comment. It is parsed in views.py to give some useful links in showActivities()
newComment = "%d MeasuredParameters loaded. Loaded on %sZ" % (self.loaded, datetime.utcnow())
logger.info("runHABLoader(): Updating its comment with newComment = %s", newComment)
aName = location
num_updated = m.Activity.objects.using(self.dbAlias).filter(id = self.activity.id).update(
name = aName,
comment = newComment,
maptrack = None,
mappoint = 'POINT(%s %s)' % (lon, lat),
mindepth = self.mindepth,
maxdepth = self.maxdepth,
num_measuredparameters = self.loaded,
loaded_date = datetime.utcnow())
self.updateActivityParameterStats(self.parameterCount)
self.updateCampaignStartEnd()
def process(self, file):
'''
Insert a Sample record to the database for each location in csv file.
Assumes that *.csv file exists on the local filesystem
*.csv file look like:
"Project: Harmful Algal Blooms"
"Calibrated: No"
"Requested start: 2011-10-05 13:00:00"
"Requested end: 2012-11-28 21:03:00"
"Request time: Wed, 28 Nov 2012 21:03:31 +0000"
"Other Notes: All times provided in UTC"
year,month,day,time,latitude,longitude,depth (m),location,Domoic Acid (ng/mL),Pseudo-nitzschia delicatissima group (cells/L),Pseudo-nitzschia seriata group (cells/L)
2011,10,05,13:00:00,36.958,-122.017,0.0,Santa Cruz Wharf,0.92 ,NaN,47200.0000
2011,10,12,12:55:00,36.958,-122.017,0.0,Santa Cruz Wharf,0.06 ,NaN,0.0000
2011,10,19,13:09:00,36.958,-122.017,0.0,Santa Cruz Wharf,0.26 ,NaN,13450.0000
2011,10,26,14:10:00,36.958,-122.017,0.0,Santa Cruz Wharf,0.05 ,NaN,900.0000
'''
fh = open(file)
try:
self.process_csv_file(fh)
except SingleActivityNotFound:
logger.error('Invalid csv file %s', file)
exit(-1)
if __name__ == '__main__':
_debug = True
try:
file = sys.argv[1]
except IndexError:
logger.error('Must specify csv file as first argument')
exit(-1)
try:
dbAlias = sys.argv[2]
except IndexError:
dbAlias = 'stoqs_habs'
#datetime.now(pytz.utc)
campaignName = 'SCCOS HABS 2011-2012'
activityName = 'Sample'
activitytypeName = 'WaterAnalysis'
platformName = 'Pier'
platformColor = '11665e'
platformTypeName = 'pier'
    start = datetime(2011, 1, 1)
end = datetime(2012,12,31)
sl = HABLoader(activityName, platformName, dbAlias, campaignName, activitytypeName,platformColor, platformTypeName, start, end)
sl.process(file)
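The row layout documented in `process()` can be exercised on its own. A standalone sketch (not part of the loader; the column order follows the header shown in the docstring):

```python
from datetime import datetime

def parse_hab_row(row):
    """Parse one HABs CSV data row (a list of string fields) into
    (timestamp, lon, lat, depth, location). Illustrative sketch only."""
    year, month, day = int(row[0]), int(row[1]), int(row[2])
    hours, mins, secs = [int(v) for v in row[3].split(':')]
    lat, lon, depth = float(row[4]), float(row[5]), float(row[6])
    location = row[7]
    return datetime(year, month, day, hours, mins, secs), lon, lat, depth, location
```

For example, `parse_hab_row("2011,10,05,13:00:00,36.958,-122.017,0.0,Santa Cruz Wharf".split(','))` recovers the Santa Cruz Wharf sample time and position from the first data row above.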
|
Gardeners have had a long time to get it right with Michaelmas daisies. In fact we have had 300 years, yet we still complain about our Michaelmas daisies: too tall, floppy, prone to mildew… Gerard's Catalogue and Herbal of 1596-7 is the given date for the introduction of Aster amellus, or Italian aster, to Britain (p537, Hadfield's 1936 Gardener's Companion), and in "1710 Aster novi-belgii and A. novae-angliae, parents of the modern Michaelmas daisy, (arrived) from N. America" (pp537-539, Hadfield 1936).
This explains the mid-nineteenth century description of Michaelmas daisies in The Flower Garden by E.S. Delamer “ – Great, straggling, gawky things, which would be discarded, but that they put forth flowers, in considerable variety of white, pink, purple, and blue, when almost everything else is in the sear and yellow leaf, and therefore are acceptable to help fill up bouquets.” Our friend, Eugene Sebastien Delamer, clearly doesn’t want them in his backyard.
Bought from Free Spirit Nursery, I have it planted in partial shade (facing north west) with the mauve Verbena bonariensis and next to my neighbour’s cedar trimmed into a ball. It is mildew resistant and beloved by bees and butterflies.
The Telegraph article mentioned earlier lists 6 of the best, another of which is my personal favourite Aster amellus ‘Veilchenkönigin’ which translates as Violet Queen. I first fell in love with the Violet Queen after seeing it in Christopher Lloyd’s book The Year at Great Dixter (1987), where he pairs this stocky short Italian aster with lipstick pink Nerine x bowdenii. The vibrant violet-blue colour and short stems of this aster make it ideal for overcast North Vancouver days and wetter weather, and as an added bonus, bees love the single flat blooms.
“It is a cross between Aster amellus (Italian aster or starwort) and A. thomsonii, this is an upright perennial with an ability to thrive in dry areas. A. thomsonii is a Himalayan species found on dry woodland edges. ‘Mönch’ was one of several deliberate crosses made by a Swiss nurseryman called (Carl Ludwig) Frikart around 1918 when he was just 29. He named all his crosses after Swiss mountain peaks. ‘Mönch’ is one of the best, and ‘Wunder von Stafa’ bred in 1924, is similar but with bluer flowers.” http://www.rhs.org.uk/Gardens/Harlow-Carr/About-Harlow-Carr/Plant-of-the-month/October/Aster-x-frikartii-Mönch There is also another of his crosses called the ‘Eiger’. I would recommend gently staking the monk and lifting and dividing it every three years.
There is another lovely, slightly darker violet-blue Aster amellus called 'King George', which is shorter and less floppy than 'Mönch'.
|
# coding=utf-8
# --------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See License.txt in the project root for
# license information.
#
# Code generated by Microsoft (R) AutoRest Code Generator.
# Changes may cause incorrect behavior and will be lost if the code is
# regenerated.
# --------------------------------------------------------------------------
from msrest.serialization import Model
class PersonalizerError(Model):
"""The error object.
All required parameters must be populated in order to send to Azure.
:param code: Required. High level error code. Possible values include:
'BadRequest', 'ResourceNotFound', 'InternalServerError'
:type code: str or ~azure.cognitiveservices.personalizer.models.ErrorCode
:param message: Required. A message explaining the error reported by the
service.
:type message: str
:param target: Error source element.
:type target: str
:param details: An array of details about specific errors that led to this
reported error.
:type details:
list[~azure.cognitiveservices.personalizer.models.PersonalizerError]
:param inner_error: Finer error details.
:type inner_error:
~azure.cognitiveservices.personalizer.models.InternalError
"""
_validation = {
'code': {'required': True},
'message': {'required': True},
}
_attribute_map = {
'code': {'key': 'code', 'type': 'str'},
'message': {'key': 'message', 'type': 'str'},
'target': {'key': 'target', 'type': 'str'},
'details': {'key': 'details', 'type': '[PersonalizerError]'},
'inner_error': {'key': 'innerError', 'type': 'InternalError'},
}
def __init__(self, **kwargs):
super(PersonalizerError, self).__init__(**kwargs)
self.code = kwargs.get('code', None)
self.message = kwargs.get('message', None)
self.target = kwargs.get('target', None)
self.details = kwargs.get('details', None)
self.inner_error = kwargs.get('inner_error', None)
|
A unique design classic and a perfect gift that lasts forever.
Kay Bojesen's Monkey was born in 1951 in Denmark and is made of teak and limba wood. It is a monkey with a wonderful personality that fits perfectly for birthdays and weddings. The monkey is a beloved design icon for both adults and children. Simply an appreciated friend for life. The design monkey is available in different sizes: mini, small, medium and large.
|
# Copyright 2011-2012 Free Software Foundation, Inc.
#
# This file is part of GNU Radio
#
# GNU Radio is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 3, or (at your option)
# any later version.
#
# GNU Radio is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with GNU Radio; see the file COPYING. If not, write to
# the Free Software Foundation, Inc., 51 Franklin Street,
# Boston, MA 02110-1301, USA.
import numpy
import extras_swig
#hacky way so we can import in-tree
try: import pmt
except ImportError: from gruel import pmt
for name in dir(extras_swig):
if 'pmt' in name:
setattr(pmt, name, getattr(extras_swig, name))
setattr(pmt, name.replace('ext_blob', 'blob'), getattr(extras_swig, name))
#this function knows how to convert an address to a numpy array
def __pointer_to_ndarray(addr, nitems):
dtype = numpy.dtype(numpy.uint8)
class array_like:
__array_interface__ = {
'data' : (int(addr), False),
'typestr' : dtype.base.str,
'descr' : dtype.base.descr,
'shape' : (nitems,) + dtype.shape,
'strides' : None,
'version' : 3
}
return numpy.asarray(array_like()).view(dtype.base)
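The `__array_interface__` trick above can be demonstrated without SWIG: any object exposing that dict is viewable as a numpy array. A self-contained sketch (illustrative only) that wraps a ctypes buffer's address the same way:

```python
import ctypes
import numpy

def ndarray_from_address(addr, nitems):
    # Same pattern as __pointer_to_ndarray above: describe the raw
    # memory via the array interface protocol, then view it as uint8.
    dtype = numpy.dtype(numpy.uint8)
    class array_like(object):
        __array_interface__ = {
            'data': (int(addr), False),   # (address, read_only flag)
            'typestr': dtype.base.str,
            'descr': dtype.base.descr,
            'shape': (nitems,),
            'strides': None,
            'version': 3,
        }
    return numpy.asarray(array_like()).view(dtype.base)

buf = ctypes.create_string_buffer(b'\x01\x02\x03')  # keep a reference alive
arr = ndarray_from_address(ctypes.addressof(buf), 3)
```

Note the caller must keep the underlying buffer alive for as long as the array is in use, since numpy does not own the memory.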
#re-create the blob data functions, but yield a numpy array instead
def pmt_blob_data(blob):
return __pointer_to_ndarray(extras_swig.pmt_blob_rw_data(blob), extras_swig.pmt_blob_length(blob))
#re-create mgr acquire by calling into python GIL-safe version
class pmt_mgr:
def __init__(self): self._mgr = extras_swig.pmt_mgr()
def set(self, x): self._mgr.set(x)
def reset(self, x): self._mgr.reset(x)
def acquire(self, block = True): return extras_swig.pmt_mgr_acquire_safe(self._mgr, block)
#inject it into the pmt namespace
pmt.pmt_make_blob = extras_swig.pmt_make_blob
pmt.pmt_blob_data = pmt_blob_data #both call rw data, numpy isn't good with const void *
pmt.pmt_blob_rw_data = pmt_blob_data
pmt.pmt_blob_resize = extras_swig.pmt_blob_resize
pmt.pmt_mgr = pmt_mgr
|
The firm seeks to be independent, dynamic and thorough. These main objectives have contributed to the firm’s reputation as a leading law firm in very specialized areas of law.
Today, the three partners and their teams draw on their legal expertise and experience to find solutions that are both innovative and practical.
The firm is known for its ability to handle complex legal issues and provide clients support that is both thorough and creative. The emblematic matters that partners and their teams are involved in are evidence of the added value they bring to such transactions in each of their specialized practice areas.
For several years committed to contribute to civic values, Wilhelm & Associés has sponsored different cultural, social and academic projects.
With a deep understanding of its clients’ interests, the firm takes into account the specific issues and risks for each company in each of the matters it handles. This comprehensive approach allows the firm to identify from the outset the appropriate team and methods to put in place.
|
""" A simple example to test if NAO can connect to thingspeak.com.
"""
from naoqi import ALProxy
import httplib, urllib
import time
# NAO_IP = "mistcalf.local"
NAO_IP = "192.168.0.13"
tts = ALProxy("ALTextToSpeech", NAO_IP, 9559)
def main():
tts.say("I'm going to connect to the internet now!")
for i in range (5):
params = urllib.urlencode({'field1': i, 'key':'9PBYIQ1RWXJ6XZBO'}) # use your API key generated in the thingspeak channels for the value of 'key'
headers = {"Content-typZZe": "application/x-www-form-urlencoded","Accept": "text/plain"}
conn = httplib.HTTPConnection("api.thingspeak.com:80")
try:
conn.request("POST", "/update", params, headers)
response = conn.getresponse()
print i, params
print response.status, response.reason
data = response.read()
conn.close()
except:
print "connection failed"
tts.say("I sent some stuff to the internet!")
# If using ThingSpeak web service there is a 15s limit. Install locally or on own webserver for faster usage.
time.sleep(16)
tts.say("Yay, all sent!")
if __name__ == "__main__":
main()
|
I am a health coach, international speaker and award winning author of "It Feels Good to Feel Good, learn to eliminate toxins, reduce inflammation and feel great again." available on Amazon. I do my best work with people struggling with chronic disease.
You don't need to live a life of pain and pills.
• People with autoimmune disease or chronic illness: Parkinson's, Alzheimer's, depression, skin issues, diabetes, cancer, heart disease, strokes, any type of inflammation.
• Moms who want to raise healthy families.
I am a no-nonsense, compassionate speaker, author and coach who inspires others to take control of their own health and be proactive so they can live long and thrive. My focus is on the toxins in our world that are making us ill. I am an autoimmune disease and chronic pain survivor.
I reversed much of my own inflammation and pain, have walked in your shoes, and can help you and hold you accountable to return to wellness.
Reduce the toxins in your life.
You should buy my book.
I teach you where the toxins are and what to replace them with.
Take your life back today. What are you waiting for?
I am on a mission to help people clean up their toxic load, whether they currently suffer from chronic pain, or are currently healthy and want to stay that way. I want to teach parents how to raise healthy children. I am a compassionate coach that can guide you back to health.
I am also available for speaking engagements on eliminating toxins for your health, the health of your family and the health of our earth. I have already done 50 radio/podcasts. I have spoken to cancer groups, stroke recovery groups, business groups, and churches and would love to speak to your group. Simple lifestyle changes can dramatically improve your health, the health of your family and the health of your employees. Contact me today.
My book, It Feels Good to Feel Good, learn to eliminate toxins, reduce inflammation and feel great again, was published in April 2017. It has won 12 awards.
|
"""
anaconda_mode
~~~~~~~~~~~~~
This is anaconda_mode autocompletion server.
:copyright: (c) 2013-2015 by Artem Malyshev.
:license: GPL3, see LICENSE for more details.
"""
from __future__ import (
absolute_import, unicode_literals, division, print_function)
import sys
from functools import wraps
from os.path import abspath, dirname
from pkg_resources import get_distribution, DistributionNotFound
from subprocess import Popen
project_path = dirname(abspath(__file__))
sys.path.insert(0, project_path)
missing_dependencies = []
try:
from jedi import Script, NotFoundError
except ImportError:
missing_dependencies.append('jedi')
try:
from service_factory import service_factory
except ImportError:
missing_dependencies.append('service_factory')
if missing_dependencies:
command = ['pip', 'install', '-t', project_path] + missing_dependencies
pip = Popen(command)
pip.communicate()
    assert pip.returncode == 0, 'PyPI installation failed.'
from jedi import Script, NotFoundError
from service_factory import service_factory
print('Python executable:', sys.executable)
for package in ['jedi', 'service_factory']:
try:
version = get_distribution(package).version
except DistributionNotFound:
print('Unable to find {package} version'.format(package=package))
else:
print('{package} version: {version}'.format(
package=package, version=version))
def script_method(f):
"""Create jedi.Script instance and apply f to it."""
@wraps(f)
def wrapper(source, line, column, path):
try:
return f(Script(source, line, column, path))
except NotFoundError:
return []
return wrapper
@script_method
def complete(script):
"""Select auto-complete candidates for source position."""
def first_line(text):
"""Return text first line."""
return text.strip().split('\n', 1)[0]
return [{'name': comp.name,
'doc': comp.docstring() or None,
'info': first_line(comp.docstring(raw=True)) or None,
'type': comp.type,
'path': comp.module_path or None,
'line': comp.line}
for comp in script.completions()]
@script_method
def doc(script):
"""Documentation for all definitions at point."""
docs = ['\n'.join([d.module_name + ' - ' + d.description,
'=' * 40,
d.docstring() or "- No docstring -"]).strip()
for d in script.goto_definitions()]
return ('\n' + '-' * 40 + '\n').join(docs)
def process_definitions(f):
@wraps(f)
def wrapper(script):
cache = {script.path: script.source.splitlines()}
def get_description(d):
if d.module_path not in cache:
with open(d.module_path, 'r') as file:
cache[d.module_path] = file.read().splitlines()
return cache[d.module_path][d.line - 1]
return [{'line': d.line,
'column': d.column,
'name': d.name,
'description': get_description(d),
'module': d.module_name,
'type': d.type,
'path': d.module_path}
for d in f(script) if not d.in_builtin_module()]
return wrapper
@script_method
@process_definitions
def goto_definitions(script):
return script.goto_definitions()
@script_method
@process_definitions
def goto_assignments(script):
return script.goto_assignments()
@script_method
@process_definitions
def usages(script):
return script.usages()
@script_method
def eldoc(script):
"""Return eldoc format documentation string or ''."""
signatures = script.call_signatures()
if len(signatures) == 1:
sgn = signatures[0]
return {
'name': sgn.name,
'index': sgn.index,
'params': [p.description for p in sgn.params]
}
return {}
app = [complete, doc, goto_definitions, goto_assignments, usages, eldoc]
if __name__ == '__main__':
host = sys.argv[1] if len(sys.argv) == 2 else '127.0.0.1'
service_factory(app, host, 'auto', 'anaconda_mode port {port}')
|
The Icebug Metro2 BUGrip has the same ice-crushing capability of its predecessor with a slight cosmetic update to the sole. Sixteen fixed carbide studs create one of the most slip-resistant winter boots on the market, perfect for use on slick streets and snowy trails. Don't fear the falls this season; lace up the Icebug Metro2 BUGrip for comfortable, high tech protection on the worst winter days!
Looking for the Icebug Metro Women's?
Want to learn more about Slip Resistant Winter Boots? We have an awesome blog post that explains all of your options for high traction snow boots!
Most customers said it runs true to size; a few said it felt small.
Comfort Range 5°F to 50°F *Varies depending on cold tolerance, metabolism and activity level.
|
#!/usr/bin/env python3
import unittest
from django.test.client import RequestFactory
from varapp.samples.samples_factory import *
from varapp.constants.tests import NSAMPLES
from varapp.samples.samples_service import samples_selection_from_request
class TestSample(unittest.TestCase):
def test_sample(self):
s = Sample('A', sample_id=0)
self.assertEqual(s.sample_id, 0)
self.assertEqual(s.name, 'A')
def test_expose(self):
s = Sample('A', sample_id=0)
self.assertIsInstance(s.expose(), dict)
self.assertEqual(s.expose()['name'], 'A')
class TestSamplesSelectionFactory(unittest.TestCase):
def test_sample_factory(self):
"""Convert one Django `Samples` object to a `Sample`."""
django_s = Samples.objects.using('test').all()[0]
model_s = sample_factory(django_s)
self.assertIsInstance(model_s, Sample)
def test_samples_list_from_db(self):
samples = samples_list_from_db(db='test')
self.assertIsInstance(samples, list)
self.assertIsInstance(samples[0], Sample)
def test_samples_selection_factory(self):
"""Create a `SamplesSelection` from the database content."""
ss = samples_selection_factory(db='test')
self.assertIsInstance(ss, SamplesSelection)
        self.assertEqual(len(ss.samples), NSAMPLES)
def test_samples_selection_from_samples_list(self):
"""Create a SamplesSelection from a list of `Sample`s."""
samples = [Sample('a'), Sample('b')]
ss = SamplesSelection(samples)
self.assertEqual(len(ss.samples), 2)
def test_samples_selection_from_request(self):
"""Create a SamplesSelection from groups dessribed in URL."""
request = RequestFactory().get('', [('samples','group1=09818'),
('samples','group2=09819'), ('samples','group3=09960,09961')])
ss = samples_selection_from_request(request, db='test')
self.assertIsInstance(ss, SamplesSelection)
if __name__ == '__main__':
unittest.main()
|
High Performance Heating Coil: reaches operational temperature in 6 seconds. REDLINK Intelligence: temperature management system provides maximum life. Most compact electric heat gun: 6.35 in. long. Guarded Nozzle: prevents contact with unprotected surfaces. Ladder Hook: easily hangs between applications.
The M18 Compact Heat Gun performs applications quickly by delivering the temperature of corded heat guns (1000°F). The high performance heating coil reaches operational temperature in less than 6 seconds, eliminating downtime. As the most compact heat gun on the market, it can go places where corded heat guns can't. The guarded nozzle and ladder hook prevent the tool from coming in contact with unprotected surfaces.
Powered by REDLITHIUM battery technology, it is able to heat over 40 connections on a single XC5.0 battery, anywhere, anytime, without the hassle of cords. The M18 Compact Heat Gun features REDLINK PLUS Intelligence with a temperature management system to provide maximum life. The item "Milwaukee 2688-21 M18 18 Volt Cordless Heat Gun Kit with Battery and Charger" has been listed since Thursday, January 25, 2018, in the category "Home & Garden\Tools & Workshop Equipment\Power Tools\Heat Guns". The seller is "xangussupplyx", located in Canton, Michigan, and the item ships internationally.
|
# MAKE SURE THAT THIS IS IN SAME DIRECTORY AS MASTER PYTHON FILE
import serial, sys
START_VAL = 0x7E
END_VAL = 0xE7
COM_BAUD = 57600
COM_TIMEOUT = 1
COM_PORT = 7
DMX_SIZE = 512
LABELS = {
'GET_WIDGET_PARAMETERS' :3, #unused
'SET_WIDGET_PARAMETERS' :4, #unused
'RX_DMX_PACKET' :5, #unused
'TX_DMX_PACKET' :6,
'TX_RDM_PACKET_REQUEST' :7, #unused
'RX_DMX_ON_CHANGE' :8, #unused
}
class DMXConnection(object):
def __init__(self, comport = None):
'''
On Windows, the only argument is the port number. On *nix, it's the path to the serial device.
For example:
DMXConnection(4) # Windows
DMXConnection('/dev/tty2') # Linux
DMXConnection("/dev/ttyUSB0") # Linux
'''
self.dmx_frame = [0] * DMX_SIZE
try:
self.com = serial.Serial(comport, baudrate = COM_BAUD, timeout = COM_TIMEOUT)
except:
com_name = 'COM%s' % (comport + 1) if type(comport) == int else comport
print "Could not open device %s. Quitting application." % com_name
sys.exit(0)
print "Opened %s." % (self.com.portstr)
def setChannel(self, chan, val, autorender = False):
'''
Takes channel and value arguments to set a channel level in the local
DMX frame, to be rendered the next time the render() method is called.
'''
if not 1 <= chan <= DMX_SIZE:
print 'Invalid channel specified: %s' % chan
return
# clamp value
val = max(0, min(val, 255))
    self.dmx_frame[chan - 1] = val  # channels are 1-based; the frame list is 0-based
if autorender: self.render()
def clear(self, chan = 0):
'''
Clears all channels to zero. blackout.
With optional channel argument, clears only one channel.
'''
if chan == 0:
self.dmx_frame = [0] * DMX_SIZE
else:
self.dmx_frame[chan-1] = 0
def render(self):
    '''
Updates the DMX output from the USB DMX Pro with the values from self.dmx_frame.
'''
packet = [
START_VAL,
LABELS['TX_DMX_PACKET'],
len(self.dmx_frame) & 0xFF,
(len(self.dmx_frame) >> 8) & 0xFF,
]
packet += self.dmx_frame
packet.append(END_VAL)
packet = map(chr, packet)
self.com.write(''.join(packet))
def close(self):
self.com.close()
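The wire format assembled in render() can be checked without hardware attached. A standalone sketch of the Enttec "TX DMX packet" framing (illustrative only; it mirrors the START_VAL/END_VAL/label constants defined above):

```python
def build_tx_packet(dmx_frame):
    """Frame a DMX universe for the USB DMX Pro:
    0x7E start byte, label 6 (TX_DMX_PACKET), little-endian
    16-bit payload length, payload bytes, 0xE7 end byte."""
    n = len(dmx_frame)
    packet = bytearray([0x7E, 6, n & 0xFF, (n >> 8) & 0xFF])
    packet.extend(dmx_frame)
    packet.append(0xE7)
    return bytes(packet)
```

For a full 512-channel frame this yields a 517-byte packet whose length field is `0x00 0x02` (512 little-endian), matching what render() writes to the serial port.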
|
Does Qualys support using Slack for notifications?
Does Qualys support using Slack for notifications (Scan complete, Scan results, Completed Map, Map Results, WAS Scan, etc.)?
This would be a useful capability as organizations move away from email and to other communications platforms with enhanced automation capabilities.
It seems not for now but if Qualys use it internally there may be an appetite to use it for customers.
Did you request it from your TAM?
Right now it's email only.
|
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, You can obtain one at http://mozilla.org/MPL/2.0/.
import logging
import psycopg2
from socorro.external import DatabaseError, MissingArgumentError
from socorro.external.postgresql.base import PostgreSQLBase
from socorro.lib import external_common
logger = logging.getLogger("webapi")
class SkipList(PostgreSQLBase):
filters = [
("category", None, ["str"]),
("rule", None, ["str"]),
]
def get(self, **kwargs):
params = external_common.parse_arguments(self.filters, kwargs)
sql_params = []
sql = """
/* socorro.external.postgresql.skiplist.SkipList.get */
SELECT category,
rule
FROM skiplist
WHERE 1=1
"""
if params.category:
            sql += ' AND category=%s'
sql_params.append(params.category)
if params.rule:
            sql += ' AND rule=%s'
sql_params.append(params.rule)
# Use `UPPER()` to make the sort case insensitive
# which makes it more user-friendly on the UI later
sql += """
ORDER BY UPPER(category), UPPER(rule)
"""
error_message = "Failed to retrieve skip list data from PostgreSQL"
sql_results = self.query(sql, sql_params, error_message=error_message)
results = [dict(zip(("category", "rule"), x)) for x in sql_results]
return {'hits': results, 'total': len(results)}
def post(self, **kwargs):
params = external_common.parse_arguments(self.filters, kwargs)
if not params.category:
raise MissingArgumentError('category')
if not params.rule:
raise MissingArgumentError('rule')
sql = """
/* socorro.external.postgresql.skiplist.SkipList.post */
INSERT INTO skiplist (category, rule)
VALUES (%s, %s);
"""
sql_params = [params.category, params.rule]
connection = self.database.connection()
try:
with connection.cursor() as cur:
cur.execute(sql, sql_params)
connection.commit()
except psycopg2.Error:
connection.rollback()
error_message = "Failed updating skip list in PostgreSQL"
logger.error(error_message)
raise DatabaseError(error_message)
finally:
connection.close()
return True
def delete(self, **kwargs):
params = external_common.parse_arguments(self.filters, kwargs)
if not params.category:
raise MissingArgumentError('category')
if not params.rule:
raise MissingArgumentError('rule')
sql_params = [params.category, params.rule]
count_sql = """
/* socorro.external.postgresql.skiplist.SkipList.delete */
SELECT COUNT(*) FROM skiplist
WHERE category=%s AND rule=%s
"""
sql = """
/* socorro.external.postgresql.skiplist.SkipList.delete */
DELETE FROM skiplist
WHERE category=%s AND rule=%s
"""
connection = self.database.connection()
try:
cur = connection.cursor()
count = self.count(count_sql, sql_params, connection=connection)
if not count:
return False
cur.execute(sql, sql_params)
connection.commit()
except psycopg2.Error:
connection.rollback()
error_message = "Failed delete skip list in PostgreSQL"
logger.error(error_message)
raise DatabaseError(error_message)
finally:
connection.close()
return True
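The conditional WHERE-clause construction used in get() generalizes. A standalone sketch (a hypothetical helper, not part of Socorro) that builds the clause list and the parameter list together, which sidesteps the missing-space class of string-concatenation bug entirely:

```python
def build_skiplist_query(category=None, rule=None):
    """Compose a parameterized SELECT for the skiplist table from
    optional filters. Returns (sql, params) for cursor.execute()."""
    clauses, params = [], []
    if category is not None:
        clauses.append('category = %s')
        params.append(category)
    if rule is not None:
        clauses.append('rule = %s')
        params.append(rule)
    sql = 'SELECT category, rule FROM skiplist'
    if clauses:
        sql += ' WHERE ' + ' AND '.join(clauses)
    # UPPER() keeps the sort case-insensitive, as in get() above
    sql += ' ORDER BY UPPER(category), UPPER(rule)'
    return sql, params
```

The values never touch the SQL string; they travel separately as `params`, exactly as psycopg2 expects.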
|
The Human Rights Campaign is honoring a pair of trailblazers in Houston this weekend.
Houston native Karamo Brown, one of the Fab 5 on Netflix's "Queer Eye" reboot, will receive the HRC Visibility Award at the 21st Annual HRC Houston Gala, 5 p.m. Saturday at the Marriott Marquis Houston (1777 Walker).
The award will be presented to Brown by actor and singer Jussie Smollett, who stars as Jamal Lyon on the hit series "Empire." Smollett released his debut album, "Sum of My Music," in March.
Tickets are on sale now and available until 5 p.m. Friday.
Brown plays the part of culture expert on "Queer Eye," which has been received enthusiastically by fans. He appeared on MTV's "The Real World: Philadelphia" in 2004, becoming the first out gay black man on reality TV.
HRC is the country's largest civil rights organization for LGBT equality.
|
from django.conf import settings
from django.shortcuts import get_object_or_404
from sales.models import Product
import logging
LOGGER = logging.getLogger(__name__)
class Cart(object):
"""Customers cart
It uses cookie NAME.
Cookie template is "product_id1::number1;product_id2::number2".
"""
NAME = "cart"
SEP_NUM, SEP_PR = '::', ';'
def __init__(self, request=None):
super(Cart, self).__init__()
self._request = request
self._storage = {}
def is_empty(self):
if self._request is None:
return True
try:
value = self._request.get_signed_cookie(
self.NAME,
None,
settings.COOKIE_SALT,
max_age=settings.COOKIE_EXPIRE
)
if not value:
return True
except KeyError:
return True
return False
def get(self, force=False):
if self._storage or self.is_empty():
return self._storage
try:
value = self._request.get_signed_cookie(
self.NAME,
None,
settings.COOKIE_SALT,
max_age=settings.COOKIE_EXPIRE
)
for pair in value.split(self.SEP_PR):
product_id, number = pair.split(self.SEP_NUM)
product = get_object_or_404(Product, pk=product_id)
self._storage[product] = number
except (KeyError, ValueError) as err:
LOGGER.error(err)
return self._storage
def set(self, response):
pairs = []
for product in self._storage:
pairs.append("{}{}{}".format(product.id, self.SEP_NUM, self._storage[product]))
value = self.SEP_PR.join(pairs)
response.set_signed_cookie(
self.NAME,
value,
settings.COOKIE_SALT,
max_age=settings.COOKIE_EXPIRE,
)
return response
def add_or_update(self, product, number, reset=False):
if reset:
products = {}
else:
products = self.get()
products[product] = number
self._storage = products
def delete(self, product):
products = self.get()
try:
products.pop(product)
except KeyError:
LOGGER.debug("ignore missing product")
self._storage = products
def count(self):
return len(self.get())
def total(self):
value = 0
products = self.get()
try:
for product in products:
value += product.price * int(products[product])
except ValueError:
return 0
return value
def has(self, product):
products = self.get()
if products.get(product):
return True
return False
def clean(self, response):
response.delete_cookie(self.NAME)
return response
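The cookie template described in the class docstring can be encoded and decoded independently of Django. A minimal sketch using product ids in place of Product objects (hypothetical helpers, same separators as the class above):

```python
SEP_NUM, SEP_PR = '::', ';'

def encode_cart(items):
    """items: dict of product_id -> quantity, e.g. {1: 2, 7: 3},
    serialized as 'id1::n1;id2::n2' (sorted for a stable result)."""
    return SEP_PR.join('{}{}{}'.format(pid, SEP_NUM, qty)
                       for pid, qty in sorted(items.items()))

def decode_cart(value):
    """Inverse of encode_cart: parse the cookie string back to a dict."""
    items = {}
    for pair in value.split(SEP_PR):
        pid, qty = pair.split(SEP_NUM)
        items[int(pid)] = int(qty)
    return items
```

In the real class the decode step additionally resolves each id via `get_object_or_404(Product, pk=product_id)`, and the whole string is protected by Django's signed-cookie machinery.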
|
Beautiful and spacious home, 3 bedrooms, 2 & 1/2 baths, formal living room, family room, and family loft that can be converted to a 4th bedroom. It is landscaped in the front and backyard. Perfect place to host family gatherings.
|