import pygame
import random
import math
from Settings import Settings
pygame.init()
clock = pygame.time.Clock()
def deg_to_rad(degrees):
    return degrees / 180.0 * math.pi
# variables to control game states
home_state = 1
info_state = 2
game_state = 3
pause_state = 4
shop_state = 5
settings_state = 6
quit_state = 7
game_over_state = 8
reset_game_state = 9
# colours used
main_theme = (0, 100, 100)
red = (100, 0, 0)
white = (255, 255, 255)
# game settings
settings = Settings()
show_notes = True
star_chance = 10 # chance of a star is 1 in 10 every update
initial_stars = 100 # create 100 stars to fill the screen
package_chance = 10 # chance of a package drop is 1 in 10 every enemy kill
# if your game runs slow, set particle count low and star chance higher
# all particles, stars, enemies and bullets are removed when they leave the screen
width = 800
height = 500
main_s = pygame.display.set_mode((width, height))
main_s.fill((255, 255, 255))
pygame.display.set_caption("SPACE SHIPS")
pygame.display.set_icon(pygame.image.load("Images/Icon.png"))
font = pygame.font.Font("./Font/tall bolder.ttf", 15)
menu_font = pygame.font.Font("./Font/tall bolder.ttf", 25)
title_font = pygame.font.Font("./Font/tall bolder.ttf", 45)
screen_rect = pygame.sprite.Sprite()
# screen_rect is slightly bigger than the screen so that objects created just
# off screen are not removed immediately. Objects are checked against it to
# decide whether they should be removed from whatever list holds them.
screen_rect.rect = pygame.Surface((width + 30, height + 30)).get_rect()
screen_rect.rect.x = 0
screen_rect.rect.y = 0
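The culling rule the comments above describe can be sketched without pygame. Here `overlaps` is a hypothetical stand-in for `pygame.Rect.colliderect`, and the 30-pixel padding mirrors `screen_rect`:

```python
# Minimal sketch (plain Python, no pygame) of the culling rule described
# above: anything whose rect no longer overlaps the padded screen rect is
# dropped from its list. pygame's Rect.colliderect does the same test.
SCREEN = (0, 0, 800 + 30, 500 + 30)  # x, y, w, h (padded by 30px)

def overlaps(a, b):
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def cull_offscreen(objects):
    """Keep only objects still overlapping the padded screen rect."""
    return [o for o in objects if overlaps(o, SCREEN)]

stars = [(100, 100, 2, 2), (900, 100, 2, 2)]  # second star is off screen
# cull_offscreen(stars) → [(100, 100, 2, 2)]
```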
|
What is a pilot study and how do I do a good one? Professor Sandra Eldridge, Centre for Primary Care and Public Health, QMUL.
Pilot/feasibility studies are an essential part of trial preparation, particularly for the planning of complex interventions. However, recent research has shown that pilot/feasibility studies suffer from publication bias and from a lack of clarity in their objectives and methodological focus. Those of us who sit on panels for funding bodies have also noticed frequent misunderstandings about the purpose of pilot/feasibility studies when these form part of a funding application. Opportunities to answer important research questions at the piloting/feasibility stage may be lost; as a result, full trials may be less efficient, interventions may be less effective, and trials may run into serious problems with conduct and recruitment that could have been anticipated and allowed for with proper piloting.
There is currently a great deal of interest in this area. Over the past two years the group has been developing reporting guidelines for pilot/feasibility studies via a series of meetings, and a large Delphi study, and will be conducting a consensus meeting later in 2014.
In the course of this work the group has grappled with, amongst other things, the definition of pilot/feasibility studies and how to ensure such a study is well done. The seminar will briefly describe this work and the work of others in this area, the debate about definitions, and some do's and don'ts of pilot/feasibility studies.
Sandra is a Professor of Biostatistics at Queen Mary University of London. Since 2007 she has been joint lead (with Chris Griffiths) of the Centre for Primary Care and Public Health. She is also director of the Pragmatic Clinical Trials Unit housed within the Centre and joint lead for the Barts and The London arm of the Research Design Service.
Sandra was instrumental in setting up the Royal Statistical Society Primary Health Care Study Group in 2002. Her main research interests are cluster randomised trials and complex interventions, particularly in primary care. She has published a number of key papers and a recent book on cluster randomised trials. In addition to her methodological research she has responsibility for the statistical design and analysis aspects of a large number of collaborative research projects mostly concerned with the management of chronic conditions. Many of these projects use a cluster randomised trial design.
For further information, please contact Cath Williams (c.e.o.williams@exeter.ac.uk, 01392 726055).
|
#!/home/paulk/software/bin/python
from __future__ import division
from sys import argv,exit,stderr
import rpy2.robjects as R
from rpy2.robjects.packages import importr
qvalue = importr('qvalue')
cor = R.r['cor']
qunif = R.r['qunif']
runif = R.r['runif']
sort = R.r['sort']
try:
    prefix = argv[1]
except IndexError:
    print >> stderr, "Usage: %s [core|all]" % argv[0]
    exit(1)
#assert prefix == 'core' or prefix == 'all'
# Note: the usage string and the commented assertion both expect 'all';
# the original elif tested 'full', which could never match.
if prefix == 'core': quniform = sort(qunif(runif(123266)))
elif prefix == 'all': quniform = sort(qunif(runif(131997)))
import fnmatch
import os
for file in os.listdir('permutation_tests'):
    #if fnmatch.fnmatch(file,'%s_norm_ps.[0-9]*' % prefix):
    if fnmatch.fnmatch(file, '%s_sanitised.paired.[0-9]*.out.tests' % prefix):
        f = open('permutation_tests/' + file)
        data = [row.strip().split('\t')[3] for row in f]
        f.close()
        data.sort()
        data = R.FloatVector(map(float, data))
        q = qvalue.qvalue(data)
        print file + "\t" + str(q[1][0])
        # print file+"\t"+str(cor(data,quniform,method="pearson")[0])
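The script above relies on R's `qvalue` package for multiple-testing correction. As a rough standalone illustration, the closely related Benjamini-Hochberg adjustment can be sketched in plain Python (this is not Storey's q-value, which additionally estimates the null proportion π0):

```python
def bh_adjust(pvalues):
    """Benjamini-Hochberg adjusted p-values (monotone, capped at 1)."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted = [0.0] * m
    prev = 1.0
    for rank in range(m, 0, -1):  # walk from the largest p-value downward
        i = order[rank - 1]
        prev = min(prev, pvalues[i] * m / rank)
        adjusted[i] = prev
    return adjusted

# bh_adjust([0.01, 0.04, 0.03, 0.005]) → approx. [0.02, 0.04, 0.04, 0.02]
```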
|
Some House Democrats are still angry at House Minority Leader Nancy Pelosi (D-CA) for refusing to let Rep. Tammy Duckworth (D-IL) vote by proxy in the leadership elections being held this week.
Duckworth, an Iraq War veteran whose legs were amputated, is pregnant and she’s been advised by her doctors not to travel before her due date in December. She first asked to be allowed to cast a proxy vote during the leadership elections a week ago. Objections were raised to Duckworth’s requests and ultimately she was denied a proxy vote.
Proxy voting is prohibited under Democratic caucus rules.
Some Democrats, however, are still objecting.
“Our party should be the party that stands for women,” Democratic National Committee Chairwoman Debbie Wasserman Schultz (D-FL) said during a caucus meeting on Tuesday, according to Roll Call.
Pelosi has reportedly come down on the opposite side of the issue from Wasserman Schultz.
On Monday at a news conference Pelosi defended her resistance to allowing Duckworth a proxy vote.
Duckworth hasn’t gone into detail about the internal fighting over her request, but she acknowledged that her bid for a proxy vote was turned down late on Thursday.
Previous reporting over the proxy vote fighting has cited unnamed Democratic aides who said the real fuel behind it was related to the internal Democratic race for the top spot on the Energy and Commerce Committee between Rep. Anna Eshoo (D-CA) and Rep. Frank Pallone (D-NJ). Pallone was backed by House Minority Whip Steny Hoyer (D-MD) and Eshoo was backed by Pelosi. Duckworth supported Eshoo.
On Wednesday Pallone won the spot.
Correction: This post said Eshoo won the Energy and Commerce Committee seat. Pallone actually did. We regret the error.
|
from __future__ import unicode_literals
from __future__ import absolute_import, division, print_function
"""
This module provides an object cacheing framework for arbitrary Python values.
The intent is thatall cacghe logic can be isolated, and may be re-implemented
using a network cache faclity such as MemCache or Redis.
The present implementation assumes a single-process, multi-threaded environment
and interlocks cache accesses to avoid possible cache-related race conditions.
"""
__author__ = "Graham Klyne (GK@ACM.ORG)"
__copyright__ = "Copyright 2019, G. Klyne"
__license__ = "MIT (http://opensource.org/licenses/MIT)"
import logging
log = logging.getLogger(__name__)
import sys
import traceback
import threading
import contextlib
from annalist.exceptions import Annalist_Error
# ===================================================================
#
# Error class
#
# ===================================================================
class Cache_Error(Annalist_Error):
    """
    Class for errors raised by cache methods.
    """
    def __init__(self, value=None, msg="Cache_error (objectcache)"):
        super(Cache_Error, self).__init__(value, msg)
        return
# ===================================================================
#
# Cache creation and discovery
#
# ===================================================================
globalcachelock = threading.Lock() # Used to interlock creation/deletion of caches
objectcache_dict = {} # Initial empty set of object caches
objectcache_tb = {}
def get_cache(cachekey):
    """
    This function locates or creates an object cache.
    cachekey is a hashable value that uniquely identifies the required cache
    (e.g. a string or URI).
    Returns the requested cache object, which may be created on-the-fly.
    """
    with globalcachelock:
        if cachekey not in objectcache_dict:
            objectcache_dict[cachekey] = ObjectCache(cachekey)
        objectcache = objectcache_dict[cachekey]  # Copy value while lock acquired
    return objectcache

def remove_cache(cachekey):
    """
    This function removes a cache from the set of object caches.
    cachekey is a hashable value that uniquely identifies the required cache
    (e.g. a string or URI).
    """
    # log.debug("objectcache.remove_cache %r"%(cachekey,))
    objectcache = None
    with globalcachelock:
        if cachekey in objectcache_dict:
            objectcache = objectcache_dict[cachekey]
            del objectcache_dict[cachekey]
    # Defer operations that acquire the cache-local lock until
    # the global lock is released
    if objectcache:
        objectcache.close()
    return
# ===== @@@Unused functions follow; eventually, remove these? ====
def make_cache_unused_(cachekey):
    """
    This function creates an object cache.
    cachekey is a hashable value that uniquely identifies the required cache
    (e.g. a string or URI).
    Returns the created cache object.
    """
    with globalcachelock:
        try:
            if cachekey in objectcache_dict:
                raise Cache_Error(cachekey, msg="Cache already exists")
            objectcache_dict[cachekey] = ObjectCache(cachekey)
            objectcache_tb[cachekey] = traceback.extract_stack()
            objectcache = objectcache_dict[cachekey]  # Copy value while lock acquired
        except Exception as e:
            print("@@@@ Cache already exists", file=sys.stderr)
            print("".join(traceback.format_list(objectcache_tb[cachekey])), file=sys.stderr)
            print("@@@@", file=sys.stderr)
            raise
    return objectcache

def remove_all_caches_unused_():
    """
    This function removes all caches from the set of object caches.
    """
    # log.debug("@@@@ remove_all_caches")
    objectcaches = []
    with globalcachelock:
        # Iterate over a copy of the keys, since entries are deleted as we go
        for cachekey in list(objectcache_dict.keys()):
            # log.debug("@@@@ remove_all_caches %r"%(cachekey,))
            objectcaches.append(objectcache_dict[cachekey])
            del objectcache_dict[cachekey]
    # Defer operations that acquire the cache-local lock until
    # the global lock is released
    for objectcache in objectcaches:
        objectcache.close()
    return

def remove_matching_caches_unused_(match_fn):
    """
    This function removes all caches whose cache key is matched by the provided
    function from the set of object caches.
    match_fn is a function that tests a supplied cache key, and returns
    True if the corresponding cache is to be removed.
    """
    objectcaches = []
    with globalcachelock:
        # Collect matches first, since entries are deleted as we go
        for cachekey in list(_find_matching_cache_keys_unused_(match_fn)):
            objectcaches.append(objectcache_dict[cachekey])
            del objectcache_dict[cachekey]
    # Defer operations that acquire the cache-local lock until
    # the global lock is released
    for objectcache in objectcaches:
        objectcache.close()
    return

def _find_matching_cache_keys_unused_(match_fn):
    """
    A generator that returns all cache keys matching the supplied function.
    match_fn is a function that tests a supplied cache key, and returns
    True if it matches some criterion.
    """
    for cachekey in objectcache_dict.keys():
        if match_fn(cachekey):
            yield cachekey
    return
# ===================================================================
#
# Object cache class
#
# ===================================================================
class ObjectCache(object):
    """
    A class for caching objects of some type.
    The cache is identified by its cache key value, which distinguishes
    a particular object cache from all others (see also `get_cache`).
    """
    def __init__(self, cachekey):
        # log.debug("ObjectCache.__init__: cachekey %r"%(cachekey,))
        self._cachekey = cachekey
        self._cachelock = threading.Lock()  # Allocate a lock object for this cache
        self._cache = {}                    # Initial empty set of values
        self._opened = traceback.extract_stack()
        self._closed = None
        return

    def cache_key(self):
        """
        Return cache key (e.g. for use with 'remove_cache')
        """
        return self._cachekey

    def flush(self):
        """
        Remove all objects from cache.
        """
        # log.debug("ObjectCache.flush: cachekey %r"%(self._cachekey,))
        with self._cachelock:
            self._cache.clear()
        return self

    def close(self):
        """
        Close down this cache object. Once closed, it cannot be used again.
        """
        # log.debug("ObjectCache.close: cachekey %r"%(self._cachekey,))
        self.flush()
        self._cachelock = None  # Discard lock object
        self._closed = traceback.extract_stack()
        return

    def set(self, key, value):
        """
        Save object value in cache (overwriting any existing value for the key).
        key is a hashable value that uniquely identifies the required cache
        entry (e.g. a string or URI).
        value is a (new) value that is to be associated with the key.
        """
        with self._cachelock:
            self._cache[key] = value
        return value

    def get(self, key, default=None):
        """
        Retrieve object value from cache, or return default value
        """
        if self._cachelock is None:
            msg = "Access after cache closed (%r, %s)"%(self._cachekey, key)
            log.error(msg)
            log.debug("---- closed at:")
            log.debug("".join(traceback.format_list(self._closed)))
            log.debug("----")
            raise Exception(msg)
        # print("@@@@ self._cachelock %r, self._cachekey %r"%(self._cachelock, self._cachekey))
        with self._cachelock:
            value = self._cache.get(key, default)
        return value

    def pop(self, key, default=None):
        """
        Remove object value from cache, return that or default value
        """
        with self._cachelock:
            value = self._cache.pop(key, default)
        return value

    @contextlib.contextmanager
    def access(self, *keys):
        """
        A context manager for interlocked access to a cached value.
        The value bound by the context manager (for a 'with ... as' assignment)
        is a dictionary that has entries for each of the values in the supplied
        key values for which there is a previously cached value.
        On exit from the context manager, if the value under any of the given
        keys has been changed, or if any new entries have been added, they are
        used to update the cached values before the interlock is released.
        Use like this:
            with cacheobject.access("key1", "key2", ...) as value:
                # value is dict of cached values for given keys
                # interlocked processing code here
                # updates to value are written back to cache on leaving context
        See: https://docs.python.org/2/library/contextlib.html
        """
        with self._cachelock:
            value_dict = {}
            for key in keys:
                if key in self._cache:
                    value_dict[key] = self._cache[key]
            yield value_dict
            for key in value_dict:
                self._cache[key] = value_dict[key]
            # If needed: this logic removes keys deleted by yield code...
            # for key in keys:
            #     if key not in value_dict:
            #         del self._cache[key]
        return
    # ===== @@@ UNUSED - remove this in due course =====
    def find_unused_(self, key, load_fn, seed_value):
        """
        Returns cached value for key, or calls the supplied function to obtain a
        value, and caches and returns that value.
        If a previously-cached value is present, the value returned is:
            (False, old-value)
        If a previously-cached value is not present, the function is called with
        the supplied "seed_value" as a parameter, the resulting value is cached
        under the supplied key, and the return value is:
            (True, new-value)
        """
        # log.debug("ObjectCache.find: cachekey %r, key %r"%(self._cachekey, key))
        with self._cachelock:
            if key in self._cache:
                old_value = self._cache[key]
                result = (False, old_value)
            else:
                new_value = load_fn(seed_value)
                self._cache[key] = new_value
                result = (True, new_value)
        return result
# End.
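A hypothetical usage sketch of the API above. `MiniCache` is a stripped-down stand-in for `ObjectCache` (only `set`/`get`/`access`), restated so the snippet runs on its own:

```python
import threading
import contextlib

# Stand-in for the ObjectCache API above, trimmed to set/get/access so the
# usage pattern can be shown standalone.
class MiniCache(object):
    def __init__(self, cachekey):
        self._cachekey = cachekey
        self._cachelock = threading.Lock()
        self._cache = {}

    def set(self, key, value):
        with self._cachelock:
            self._cache[key] = value
        return value

    def get(self, key, default=None):
        with self._cachelock:
            return self._cache.get(key, default)

    @contextlib.contextmanager
    def access(self, *keys):
        # Hold the lock across the whole with-block, as ObjectCache.access does
        with self._cachelock:
            value_dict = {k: self._cache[k] for k in keys if k in self._cache}
            yield value_dict
            self._cache.update(value_dict)

cache = MiniCache("http://example.org/cache1")
cache.set("a", 1)
with cache.access("a", "b") as values:
    values["b"] = values["a"] + 1  # read-modify-write under one lock
# cache.get("b") → 2
```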
|
In truth, An Occupation of Loss, as Simon’s eerie event is formally entitled, is only partially about international identities and the perils of belonging. More profoundly, it’s about the act of mourning: how we as a species respond to death, and how the emotions that rise up at times of passing can be turned into something tangible and solid. So solid, it can be packed into a suitcase and successfully transported across every international boundary.
Here’s what actually happens. You turn up at an address in Islington and discover a construction site that has been neatly boarded up: a new building, sitting there, unfinished. At exactly quarter to the hour, a door opens in the perimeter and you are ushered into a huge concrete vestibule that’s surprisingly circular, like an indoor bullring. From outside, you would never have suspected this was on the inside.
After some moments of communal puzzlement, shared with your fellow attendees, exactly on the hour a black-clad attendant appears and — having requested your silence for the duration of the piece — leads you down, down, down, into the bowels of the building. If the bullring upstairs was a surprise, the concrete arena at which you eventually arrive is positively shocking. It’s enormous. About the size of a Bond villain’s headquarters. Indeed, if you imagine what a Bond villain’s subterranean headquarters might look like once the bailiffs have stripped it back to bare concrete, you would have a pretty accurate idea of the scale of Simon’s futuristic Hades. Beneath the everyday surfaces of London, all this is going on.
A drum sounds. And at the bottom of the pit into which you have descended, a spooky line of international characters begins slowly filing through, like Noah’s animals disembarking from the ark: two men in black, one of whom has an accordion; two Buddhists in white; two black women with red shawls; a pair of shadowy figures in head-to-toe mourning robes; two women wrapped in keffiyehs. There’s something heavily ancient about their presence. And we urbanistas and metrosexuals in the audience are made immediately to feel paper-thin and new. At least, that’s what it did to me.
In the centre of the arena, soaring up to the roof, is a pair of neon columns between which the mysterious cast must march. As they do so, they momentarily disappear. It’s a trick of the light: a bit of technical magic achieved by the neon. But it creates a ghostly sense of transportation, as if we are stepping from one dimension into another.
That’s when the singing begins. And the wailing. And the thumping. And the sobbing. Tracking down the haunting noises to their source, you find groups of the mysterious visitors enacting strange and uncheery rituals in the concrete niches surrounding the underground pit. One pair beat their legs and wail in time to their own sobbing. Another, the Buddhists, sing their laments through electronic speakers. A third crouch down on a concrete shelf and beat their heads against the walls.
They turn out to be professional mourners, gathered from faraway corners of the globe — China, Azerbaijan, Ghana — and imported into this unlikely London setting to unleash their professional misery on us. In their homelands, their job is to add appropriate levels of despair to the passings for which they have been hired. Here, in the bowels of trendy London, their intense sobbing and wailing appears to critique the surrounding modernity, and accuse it of superficiality.
The laments continue for 30 minutes. Then, as abruptly as they started, they cease. A profound silence descends upon the concrete Hades into which we urban Orpheuses have been lured. As we leave, an elegant white package is thrust into our hands. It contains the visa applications made by the mourners Simon has brought over for her happening. It’s all there. The cost of the visas; the time and place of the interviews.
Thus the curt language of bureaucracy crushes the profound moods into which we have been plunged, like a steel toecap stamping on a butterfly.
Does anyone know the postcode for Downing Street? I’d like to send the PM a ticket to this event.
|
from queue import Queue
class ALGraph:
    def __init__(self):
        self.__graph = {}

    def add_vertex_edge(self, v1, e, v2):
        # Ensure the vertex exists, then record the edge. (The original
        # "else" branch silently dropped the first edge of a new vertex.)
        if v1 not in self.__graph:
            self.__graph[v1] = []
        self.__graph[v1].append((v2, e))

    def add_vertex_edge_undirected(self, v1, e, v2):
        self.add_vertex_edge(v1, e, v2)
        self.add_vertex_edge(v2, e, v1)

    def get_adjacent(self, v):
        return self.__graph[v]

    def __iter__(self):
        return iter(self.__graph.items())

    def vertices(self):
        return iter(self.__graph)

    def __getitem__(self, item):
        return self.get_adjacent(item)

    def __len__(self):
        return len(self.__graph)

    def read_csv(self, file_path, vertex_class, edge_class, sep=','):
        """Reads a csv file with 3 columns (not checked):
        a vertex, an edge label, and a float weight."""
        with open(file_path, 'r') as file:
            movie_to_actors = {}
            file.readline()  # skip header
            for line in file:
                vertex_edge_weight = line.strip().split(sep)
                vertex = vertex_class(vertex_edge_weight[0])
                edge = edge_class(vertex_edge_weight[1], float(vertex_edge_weight[2]))
                if edge not in movie_to_actors:
                    movie_to_actors[edge] = [vertex]
                else:
                    movie_to_actors[edge].append(vertex)
            for edge in movie_to_actors:
                actors = movie_to_actors[edge]
                for i in range(len(actors)):
                    for j in range(i + 1, len(actors)):
                        self.add_vertex_edge_undirected(actors[i], edge, actors[j])

    def bfs(self, start, goal):
        frontier = Queue()
        frontier.put(start)
        # v -> (pred, edge); seeding start keeps it from being re-enqueued
        came_from = {start: (None, None)}
        while not frontier.empty():
            current = frontier.get()
            if current == goal:
                break
            for v, e in self.get_adjacent(current):
                if v not in came_from:
                    frontier.put(v)
                    came_from[v] = (current, e)
        if goal not in came_from:
            return start, []  # goal unreachable
        result = []
        current = goal
        while current != start:
            current, adj_edge = came_from[current]
            result.append((current, adj_edge))  # (predecessor, connecting edge)
        result.reverse()
        return start, result
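The undirected-insertion and BFS path-reconstruction pattern used by `ALGraph` can be sketched independently with plain strings as vertices and edge labels (the names below are illustrative, not from any real dataset):

```python
from queue import Queue

# Minimal sketch of the adjacency-list + BFS pattern used by ALGraph above,
# with plain strings standing in for the vertex and edge classes.
graph = {}

def add_edge(v1, e, v2):
    # Undirected: record the edge in both adjacency lists
    graph.setdefault(v1, []).append((v2, e))
    graph.setdefault(v2, []).append((v1, e))

add_edge("Kevin Bacon", "Apollo 13", "Tom Hanks")
add_edge("Tom Hanks", "Cast Away", "Helen Hunt")

def bfs_path(start, goal):
    frontier = Queue()
    frontier.put(start)
    came_from = {start: None}  # v -> (pred, edge)
    while not frontier.empty():
        current = frontier.get()
        if current == goal:
            break
        for v, e in graph.get(current, []):
            if v not in came_from:
                frontier.put(v)
                came_from[v] = (current, e)
    if goal not in came_from:
        return None  # unreachable
    path = []
    current = goal
    while current != start:
        pred, e = came_from[current]
        path.append((e, current))
        current = pred
    path.reverse()
    return path

# bfs_path("Kevin Bacon", "Helen Hunt") →
# [("Apollo 13", "Tom Hanks"), ("Cast Away", "Helen Hunt")]
```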
|
Weeds with deep roots can infiltrate and damage the leach field.
The leach field of your septic system is the final step in filtering wastewater from your home. Normally only grass is allowed to grow over the field, but if weeds have taken over, you can kill them without digging into the soil. A glyphosate product such as Roundup kills the weeds, and everything else, down to the roots. While ready-to-use products are available, applying a Roundup concentrate is an economical solution to the weed problem.
Stop mowing two weeks before applying the Roundup, so the weeds are growing vigorously.
Wait until a clear, windless day when the temperatures are above 60 degrees Fahrenheit.
Put on safety gear, including long sleeves, long pants, chemical-resistant gloves, safety goggles and a breathing mask.
Pour the Roundup concentrate in a hose-end sprayer, setting the dial at 6 ounces per gallon. One quart of concentrate makes up to 10 gallons of solution, which treats an area up to 3,000 square feet.
Spray the Roundup onto the weeds over the leach field, wetting the leaves' top and bottom with the solution. Avoid spraying any desirable plants as Roundup kills all vegetation.
Wait up to two weeks to ensure that all the weeds are dead. Roundup is absorbed by the leaves of actively growing plants, killing them down to the roots. However, some plants are so vigorous that they require more than one application of glyphosate, the active ingredient in Roundup.
You can scatter grass seeds or flower seeds three days after applying Roundup; no residual will be left in the soil.
Avoid disturbing the soil over the leach field; you risk exposure to harmful bacteria and damaging the leach lines.
Only plant shallow-rooted grasses and flowers over a leach field. The roots of shrubs, trees and deep-rooted flowers may damage the septic system.
United States Environmental Protection Agency: The Basics -- What Is a Septic System?
de, Ruth. "How to Use Roundup on Leach Fields." Home Guides | SF Gate, http://homeguides.sfgate.com/use-roundup-leach-fields-86391.html. Accessed 22 April 2019.
How Much Water Do You Use With 48 Ounces of Roundup?
|
__author__ = 'rzaaeeff'
#Dependencies start
from requests import get
import json
#Finish
class FacebookLikeGrabber:
    def __init__(self, access_token):
        self.access_token_str = '?access_token=%s' % access_token

    def get_likes_by_post_id(self, post_id):
        """
        Facebook pages the like list, so this method follows the
        'paging.next' links until every page has been read.
        """
        post_likes = {}
        # &limit=1000 asks for up to 1000 likes per page
        response = get("https://graph.facebook.com/" + post_id + "/likes/" +
                       self.access_token_str + '&limit=1000')
        raw = json.loads(response.text)  # response.text is a string; we need the parsed JSON
        for item in raw['data']:
            post_likes[item['name']] = item['id']
        # Follow 'next' links; a missing 'paging' or 'next' key ends the loop
        next_url = raw.get('paging', {}).get('next')
        while next_url:
            response = get(next_url)
            raw = json.loads(response.text)
            for item in raw['data']:
                post_likes[item['name']] = item['id']
            next_url = raw.get('paging', {}).get('next')
        return post_likes

    def get_posts_from_page(self, page_id, count):
        post_ids = []
        response = get("https://graph.facebook.com/" + page_id + "/statuses/" +
                       self.access_token_str + '&limit=' + str(count))
        raw = json.loads(response.text)
        for post in raw['data']:
            post_ids.append(str(post['id']))
        return post_ids
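The paging loop above can be exercised without the network by stubbing the HTTP round-trip. Here `fetch` and the canned `PAGES` are hypothetical stand-ins for `requests.get` plus `json.loads`:

```python
# Stubbed illustration of the Graph API paging loop above: `fetch` stands
# in for the HTTP round-trip, returning canned pages with a 'next' link.
PAGES = {
    "page1": {"data": [{"name": "Ann", "id": "1"}],
              "paging": {"next": "page2"}},
    "page2": {"data": [{"name": "Bob", "id": "2"}],
              "paging": {}},
}

def fetch(url):
    return PAGES[url]

def collect_likes(first_url):
    likes = {}
    url = first_url
    while url:
        raw = fetch(url)
        for item in raw["data"]:
            likes[item["name"]] = item["id"]
        url = raw.get("paging", {}).get("next")  # None ends the loop
    return likes

# collect_likes("page1") → {"Ann": "1", "Bob": "2"}
```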
|
Since I have not been buried in concrete, you can expect to see another post here soon. Have an excellent weekend!
When I first read your comment I thought maybe you were just being weird... but I "first glanced" it and sure enough, you're right!
This was way more fun than a bicycle infrastructure debate.
You have a good one too, Patrick!
Steve A has strange worries, how he gets a cement truck and dog poop mixed up is beyond reality: Birds fly.
Dog Fart! Welcome back, man! You haven't commented here since early days... although Steve's comment made sense to me. Birds fly? What? I love you, man.
Though, had the cement poured out, you could've struck a heroic pose before it dried, like those statues of generals on their warhorses.
OMG ... I have had this same experience.
Also, wow! You are able to control concrete trucks with the power of your mind (and your soothing, soothing, relaxation-tape-narrator voice)! Awesome!
|
"""
Wrapper module for sequential task (continuation) handling.
Use as fallback for _tasks_async when async/await is not available.
"""
from types import GeneratorType
class UpdateTask:
    "Wrap a generator in a task-like interface"

    def __init__(self, gen, desc):
        """
        Arguments:
        - gen: generator object
        - desc<tuple>: (Resource, key)
        """
        assert isinstance(gen, GeneratorType)
        self._gen = gen
        self._desc = desc

    def __repr__(self):
        res, pk = self._desc
        return "<UpdateTask: ({}, {})>".format(res.tag, pk)

    def cancel(self):
        pass

    def __iter__(self):
        return self._gen

    def send(self, x):
        return self._gen.send(x)

def gather(jobs):
    "Aggregate and collect jobs"
    for job in jobs:
        yield from job

def wrap_generator(func):
    "Identity decorator (only needed for async compatibility)"
    return func

def _consume_task_or_generator(item):
    if isinstance(item, (GeneratorType, UpdateTask)):
        return _consume_task(item)
    else:
        return item

def _consume_task(gen):
    r, ret = None, []
    while True:
        try:
            item = gen.send(r)
        except StopIteration:
            break
        r = _consume_task_or_generator(item)
        ret.append(r)
    if len(ret) == 1:
        return ret.pop()
    return ret

def run_task(func):
    """
    Decorator to collect and return generator results, returning a list
    if there are multiple results
    """
    def _wrapped(*a, **k):
        gen = func(*a, **k)
        return _consume_task(gen)
    return _wrapped
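A standalone demo of the `run_task` pattern above, with the helpers restated so the sketch runs on its own. A generator's yielded values are collected into a list, nested generators are consumed recursively, and a single result would be returned unwrapped:

```python
from types import GeneratorType

# Helpers restated from the module above so the demo is self-contained.
def _consume_task_or_generator(item):
    if isinstance(item, GeneratorType):
        return _consume_task(item)
    return item

def _consume_task(gen):
    r, ret = None, []
    while True:
        try:
            item = gen.send(r)  # first send must be None; r starts as None
        except StopIteration:
            break
        r = _consume_task_or_generator(item)
        ret.append(r)
    if len(ret) == 1:
        return ret.pop()
    return ret

def run_task(func):
    def _wrapped(*a, **k):
        return _consume_task(func(*a, **k))
    return _wrapped

def inner():
    yield 2
    yield 3

@run_task
def outer():
    yield 1
    yield inner()  # nested generator is consumed recursively

# outer() → [1, [2, 3]]
```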
|
In this article, I’m going to tell you what FindLaw, LexisNexis or your marketing firm won’t tell you. I’m going to show you how to attract an average of 3-7 new clients per week with just a few simple strategies. You will also learn what actions you need to take to increase your leads and reduce costs. Plus, if you ever wondered how some law firms attract multi-million dollar cases, it is simply because they have a lot of leads.
My name is Chance Hawkins and I am an online marketer based in Tulsa, Oklahoma. For years, I specialized in attorney marketing and have analyzed hundreds of law firm websites. One thing that I’ve learned is that for every attorney that dominates online, there are 25-50 that struggle.
The problem is that it ONLY pays to be #1.
…and getting to be king of the hill can be challenging. The determining factor is not the budget, or firm’s history, but rather the strategy.
To be at the top, you have to be hungry, aggressive and take the right steps within each channel of marketing. Only then, will you have so many leads that you can choose to work with the best clients and refer the rest.
The key to getting million dollar cases is to have too many leads. When you have more leads than your firm can handle, you get to be picky.
Why Attorneys fail to get new clients online?
Most attorneys would rather focus on legal matters than marketing, so they outsource.
This is where it gets tricky.
The big law firm marketing agencies have multiple layers of staff, technology, and a reputation in the industry. So they charge fees that are 3-10X more than an average company. The average marketing agency costs $135/hr, but Law Firm specific agencies charge $350/hr – $600/hr. At these prices, only the top firms can afford to pay what is needed to compete. Therefore, they remain at the top and the firms that struggle, continue to struggle.
Another industry secret is that law firm specific agencies tend to have conflicts of interest. They will have clients within the same city that work within the same practice area…..
It’s seriously a messed up situation….
To get 3-7 new clients per week, you need 150-2000 new visitors on your website, social media channels or within your marketing campaigns. So the first step is to get the required amount of traffic. It is a numbers game and an attorney needs to be top-of-mind. Once people are seeing your offer, they need to convert. The way to convert leads is to target the right people and use the right lead collection strategy. No one looks for an attorney without needing an attorney. So a Pay-Per-Click (PPC) model could work. However, it can be expensive, and if you don’t have the budget or a high conversion rate, then it may not be worth it. Search engine optimization might be able to bring in the targeted traffic, but you will need a powerful conversion tool on your website to get your website visitors to take action. Social media can also be used to collect leads if you can build a community.
I’ll break down how you can get 3-7 new clients per month with PPC & SEO in this article and if you would like to know the other ways or exactly how this would look in practice, then reach out to me to let me know….
Create a landing page that focuses on the keyword and has one goal. Remove distractions and make sure your landing page has a call-to-action that is clear and effective. This may take some testing, so keep in mind that you may need a testing budget before you can constantly get the results you should expect.
Set up your Analytics so that you can analyze what is happening. I install software to record each visitor’s actions so that I can analyze what the visitor is doing on the website. This will help you with your targeting.
Test & Modify the campaign until the leads start pouring in.
See who is currently ranking #1 for the keywords you want to rank for.
Get an SEO evaluation on the top ranking pages and make sure you have the domain/page authority to compete.
Write a good article about the keyword you are targeting with a clear call-to-action.
Set up an interactive form to collect leads from the page.
Link to the page and SEO optimize the article so that it can compete.
Once your pages start to rank, you will get targeted traffic. The targeted traffic will convert if you do a good job with your content and lead form.
To beat the system, you must remove yourself from the system. Think about how you want to strategically compete and then be smart with the actions you take and what you choose to outsource.
For example, one thing that you might want to outsource in order to improve your search engine optimization and social media is content for your website. However, content creation doesn’t cost $135/hr (some firms charge $200/article). Instead, you would want to pay a variable rate. You could pay a reasonable rate for the content and then pay an expert to optimize the content. By unbundling these services, you will reduce the cost by at least 25%.
Another tactic to get more marketing for your money is to use a marketing firm for technology access. Instead of spending money on all of the programs, subscriptions and research tools you need to properly market your firm, you could share the cost with 20 other law firms. A bundled technology package could save you $500-$1,000 per month, while giving you access to powerful marketing tools.
Each of these tactics helps a little, and I mentioned them to show that an attorney can increase his or her marketing power by thinking these things through. However, it is when you combine 100 tactics together that you experience the full benefits of smart marketing.
To get better cases, you need more leads. To get more leads, you need to implement smart marketing for your firm. It only pays to be #1, and if you don’t have the same budget as the firms that are currently at the top, then you need leverage. By using leverage, you can expect to get an average of 3-7 new clients per week on your website. It may take a couple of months or a couple of days to implement a strategic plan, but it will be worth it!
I hope you find all the leverage you need to get to the top. If you have any questions or if you would like to explore the possibility of working together, click the button below to start a conversation with me.
|
from .. import utilities
from .piece import DEFAULT_PIECE_ID
class SokobanPlusDataError(ValueError):
pass
class SokobanPlus:
"""
Manages Sokoban+ data for game board.
**Sokoban+ rules**
In this variant of game rules, each box and each goal on board get number tag
(color). Game objective changes slightly: board is considered solved only when
each goal is filled with box of the same tag. So, for example goal tagged with
number 9 must be filled with any box tagged with number 9.
Multiple boxes and goals may share same plus id, but the number of boxes with one
plus id must be equal to number of goals with that same plus id. There is also
default plus id that represents non tagged boxes and goals.
Sokoban+ ids for given board are defined by two strings called goalorder and
boxorder. For example, boxorder "13 24 3 122 1" would give plus_id = 13 to box id
= 1, plus_id = 24 to box ID = 2, etc...
**Valid Sokoban+ id sequences**
Boxorder and goalorder must define ids for equal number of boxes and goals. This
means that in case of boxorder assigning plus id "42" to two boxes, goalorder
must also contain number 42 twice.
Sokoban+ data parser accepts any positive integer as plus id.
Attributes:
DEFAULT_PLUS_ID: Sokoban+ ID for pieces that don't have one or when
Sokoban+ is disabled.
Original Sokoban+ implementation used number 99 for default plus ID. As
there can be more than 99 boxes on board, sokoenginepy changes this
detail and uses :const:`DEFAULT_PLUS_ID` as default plus ID. When loading
older puzzles with Sokoban+, legacy default value is converted
transparently.
Args:
boxorder (str): Space separated integers describing Sokoban+ IDs for boxes
goalorder (str): Space separated integers describing Sokoban+ IDs for goals
pieces_count (int): Total count of boxes/goals on board
"""
_LEGACY_DEFAULT_PLUS_ID = 99
DEFAULT_PLUS_ID = 0
def __init__(self, pieces_count, boxorder=None, goalorder=None):
self._is_enabled = False
self._is_validated = False
self.errors = []
self._pieces_count = pieces_count
self._box_plus_ids = None
self._goal_plus_ids = None
self._boxorder = None
self._goalorder = None
self.boxorder = boxorder or ""
self.goalorder = goalorder or ""
@classmethod
def is_sokoban_plus_string(cls, line):
return utilities.contains_only_digits_and_spaces(
line
) and not utilities.is_blank(line)
@classmethod
def is_valid_plus_id(cls, plus_id):
return isinstance(plus_id, int) and plus_id >= cls.DEFAULT_PLUS_ID
@property
def pieces_count(self):
return self._pieces_count
@pieces_count.setter
def pieces_count(self, rv):
if rv != self._pieces_count:
self.is_enabled = False
self._is_validated = False
self._pieces_count = int(rv)
@property
def boxorder(self):
if self.is_enabled and self.is_valid:
return self._rstrip_default_plus_ids(
" ".join(str(i) for i in self._box_plus_ids.values())
)
else:
return self._boxorder
@boxorder.setter
def boxorder(self, rv):
if rv != self._boxorder:
self.is_enabled = False
self._is_validated = False
self._boxorder = rv or ""
@property
def goalorder(self):
if self.is_enabled and self.is_valid:
return self._rstrip_default_plus_ids(
" ".join(str(i) for i in self._goal_plus_ids.values())
)
else:
return self._goalorder
@goalorder.setter
def goalorder(self, rv):
if rv != self._goalorder:
self.is_enabled = False
self._is_validated = False
self._goalorder = rv or ""
@property
def is_valid(self):
if self._is_validated:
return not self.errors
self.errors = []
try:
self._box_plus_ids = self._parse_and_clean_ids_string(self._boxorder)
self._goal_plus_ids = self._parse_and_clean_ids_string(self._goalorder)
except ValueError as exc:
self.errors.append(str(exc))
self._validate_plus_ids(self._box_plus_ids)
self._validate_plus_ids(self._goal_plus_ids)
self._validate_piece_count()
self._validate_ids_counts()
self._validate_id_sets_equality()
self._is_validated = True
return not self.errors
@property
def is_enabled(self):
return self._is_enabled
@is_enabled.setter
def is_enabled(self, value):
"""
Raises:
:exc:`SokobanPlusDataError`: Trying to enable invalid Sokoban+
"""
if value:
if not self.is_valid:
raise SokobanPlusDataError(self.errors)
self._is_enabled = value
def box_plus_id(self, for_box_id):
"""
Get Sokoban+ ID for box.
Args:
for_box_id (int): box ID
Returns:
int: If Sokoban+ is enabled returns Sokoban+ ID of a box. If not, returns
:const:`DEFAULT_PLUS_ID`
Raises:
        :exc:`KeyError`: No box with ID ``for_box_id``, but only if Sokoban+ is
enabled
"""
try:
return self._get_plus_id(for_box_id, from_where=self._box_plus_ids)
except KeyError:
raise KeyError("No box with ID: {0}".format(for_box_id))
def goal_plus_id(self, for_goal_id):
"""
Get Sokoban+ ID for goal.
Args:
for_goal_id (int): goal ID
Returns:
int: If Sokoban+ is enabled returns Sokoban+ ID of a goal. If not,
returns :const:`DEFAULT_PLUS_ID`
Raises:
:exc:`KeyError`: No goal with ID ``for_goal_id``, but only if Sokoban+ is
enabled
"""
try:
return self._get_plus_id(for_goal_id, from_where=self._goal_plus_ids)
except KeyError:
raise KeyError("No goal with ID: {0}".format(for_goal_id))
def _rstrip_default_plus_ids(self, plus_ids_str):
# TODO: Might not work correctly for "3 5 4 6 2 19" or "3 5 4 6 2 10"
if self.pieces_count < self._LEGACY_DEFAULT_PLUS_ID:
return plus_ids_str.rstrip(
str(self.DEFAULT_PLUS_ID) + " " + str(self._LEGACY_DEFAULT_PLUS_ID)
)
else:
return plus_ids_str.rstrip(str(self.DEFAULT_PLUS_ID) + " ")
def _get_plus_id(self, for_id, from_where):
if not self.is_enabled:
return self.DEFAULT_PLUS_ID
else:
return from_where[for_id]
def _parse_and_clean_ids_string(self, plus_ids_str):
"""
Safely replaces legacy default plus ids with default ones.
Returns:
dict: dict that maps piece IDs to piece Sokoban+ IDs
"""
def convert_or_raise(id_str):
try:
return int(id_str)
except ValueError:
raise SokobanPlusDataError(
"Can't parse Sokoban+ string! Illegal characters found. "
"Only digits and spaces allowed."
)
trimmed = [
convert_or_raise(id_str)
for id_str in self._rstrip_default_plus_ids(plus_ids_str).split()
]
cleaned = [
self.DEFAULT_PLUS_ID
if (
i == self._LEGACY_DEFAULT_PLUS_ID
and self.pieces_count < self._LEGACY_DEFAULT_PLUS_ID
)
else i
for i in trimmed
]
expanded = cleaned + [self.DEFAULT_PLUS_ID] * (self.pieces_count - len(cleaned))
retv = dict()
for index, plus_id in enumerate(expanded):
retv[DEFAULT_PIECE_ID + index] = plus_id
return retv
def _validate_plus_ids(self, ids):
if ids:
for i in ids.values():
if not self.is_valid_plus_id(i):
self.errors.append("Invalid Sokoban+ ID: {0}".format(i))
def _validate_piece_count(self):
if self.pieces_count < 0:
self.errors.append("Sokoban+ can't be applied to zero pieces count.")
def _validate_ids_counts(self):
if self._box_plus_ids and len(self._box_plus_ids) != self.pieces_count:
self.errors.append(
"Sokoban+ boxorder data doesn't contain same amount of IDs "
+ "as there are pieces on board! (pieces_count: {0})".format(
self.pieces_count
)
)
if self._goal_plus_ids and len(self._goal_plus_ids) != self.pieces_count:
self.errors.append(
"Sokoban+ goalorder data doesn't contain same amount of IDs "
+ "as there are pieces on board! (pieces_count: {0})".format(
self.pieces_count
)
)
def _validate_id_sets_equality(self):
if self._box_plus_ids:
boxes = set(
pid
for pid in self._box_plus_ids.values()
if pid != self.DEFAULT_PLUS_ID
)
else:
boxes = set()
if self._goal_plus_ids:
goals = set(
pid
for pid in self._goal_plus_ids.values()
if pid != self.DEFAULT_PLUS_ID
)
else:
goals = set()
if boxes != goals:
self.errors.append(
"Sokoban+ data doesn't define equal sets of IDs for "
+ "boxes and goals"
)
def __repr__(self):
return (
"SokobanPlus("
+ "pieces_count={0}, boxorder='{1}', goalorder='{2}')".format(
self.pieces_count, self.boxorder, self.goalorder
)
)
def __str__(self):
return (
"SokobanPlus("
+ "pieces_count={0}, boxorder='{1}', goalorder='{2}')".format(
self.pieces_count, self.boxorder, self.goalorder
)
)
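As a standalone illustration of the boxorder/goalorder format described in the class docstring (a sketch, not part of the library's API — it skips the legacy-99 handling done by `_parse_and_clean_ids_string`):

```python
from collections import Counter

DEFAULT_PLUS_ID = 0

def parse_plus_ids(order_str, pieces_count):
    """Map 1-based piece IDs to Sokoban+ IDs; missing tail entries get the default."""
    plus_ids = [int(token) for token in order_str.split()]
    plus_ids += [DEFAULT_PLUS_ID] * (pieces_count - len(plus_ids))
    return {piece_id: plus_id for piece_id, plus_id in enumerate(plus_ids, start=1)}

boxes = parse_plus_ids("13 24 3 122 1", 5)   # box 1 -> 13, box 2 -> 24, ...
goals = parse_plus_ids("3 122 24 13 1", 5)
# Valid Sokoban+ data: both orders use the same multiset of plus IDs
assert Counter(boxes.values()) == Counter(goals.values())
```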
|
On 2 October 1992 a rebellion erupted in the Casa de Detencao prison in Sao Paulo. Shock troops of military police stormed the prison to quell the rebellion. Eleven hours later, 111 prisoners were dead. In this paper, accounts of the disturbance and massacre are given from the perspectives of the police, the prisoners and the judges who visited the scene. AI also has concerns about the treatment of the wounded and about the withholding of information from, and abuse of, families of the dead and injured. Official investigations into the incident are discussed and AI's concerns about the forensic evidence are described. Information is given about the prison structure and about the police, including the problem of failure to prosecute for alleged extrajudicial executions.
|
from django.http import HttpResponse
from django.views.decorators.csrf import csrf_exempt
from rest_framework.renderers import JSONRenderer
from rest_framework.parsers import JSONParser
from rest_framework import status
from .models import Empresa, Calificacion
from .serializers import EmpresaSerializer, CalificacionSerializer
from django.shortcuts import render
class JSONResponse(HttpResponse):
"""
An HttpResponse that renders its content into JSON.
"""
def __init__(self, data, **kwargs):
content = JSONRenderer().render(data)
kwargs['content_type'] = 'application/json'
super(JSONResponse, self).__init__(content, **kwargs)
def index(request):
return render(request, 'apprest/index.html')
@csrf_exempt
def lista_empresas(request):
"""
    List companies, or create one
"""
if request.method == 'GET':
empresas = Empresa.objects.all()
serializer = EmpresaSerializer(empresas, many=True)
return JSONResponse(serializer.data)
elif request.method == 'POST':
data = JSONParser().parse(request)
serializer = EmpresaSerializer(data=data)
if serializer.is_valid():
serializer.save()
return JSONResponse(serializer.data, status=201)
return JSONResponse(serializer.errors, status=400)
@csrf_exempt
def empresa(request, pk):
try:
empresa = Empresa.objects.get(pk=pk)
except Empresa.DoesNotExist:
        return HttpResponse(status=status.HTTP_404_NOT_FOUND)
if request.method == 'GET':
calificaciones = empresa.calificacion_set.all()
serializer = CalificacionSerializer(calificaciones, many=True)
return JSONResponse(serializer.data)
elif request.method == 'DELETE':
empresa.delete()
        return HttpResponse(status=status.HTTP_204_NO_CONTENT)
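These views could be wired up with a `urls.py` along the following lines (the module path and URL names are assumptions, not taken from the original project):

```python
from django.conf.urls import url

from . import views

urlpatterns = [
    url(r'^$', views.index, name='index'),
    url(r'^empresas/$', views.lista_empresas, name='lista_empresas'),
    url(r'^empresas/(?P<pk>[0-9]+)/$', views.empresa, name='empresa'),
]
```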
|
Gurgaon, Oct: Former beauty queen Neha Dhupia says she’s not “that kind of a person” who propagates size zero.
Neha, now an actress, will soon be seen anchoring and co-judging the “Kingfisher Supermodels 3” show. She will also be mentoring 10 supermodels who would battle it out through 20 episodes of tasks and challenges.
Asked whether size zero is still a major issue, she replied:
“No matter how I answer it, it will sound like I am propagating size zero and that is not true. I am not that kind of person at all. The only thing I am propagating is when your professional requirement is that you have to have a certain body type, you need to have that body type,” Neha told IANS here.
“When it comes to being a model, you have to look good in clothes that the designers give you. It isn’t that the clothes should wear you. Designers look at models who can carry off their clothes,” she added.
“Kingfisher Supermodels 3” will go on air from Saturday on NDTV Good Times.
|
__author__ = 'Ivan Mahonin'
from gettext import gettext as _
from argparse import ArgumentParser
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs
from urllib.parse import unquote
from urllib.parse import urlparse
import os.path
import json
from renderchan.core import RenderChan
class RenderChanHTTPRequestHandler(BaseHTTPRequestHandler):
def do_GET(self):
parsed_url = urlparse(self.path)
args = parse_qs(parsed_url.query)
for key in args.keys():
args[key] = args[key][-1]
parsed_url_path = unquote(parsed_url.path)
while len(parsed_url_path) and (parsed_url_path[0] == '/' or parsed_url_path[0] == '\\'):
parsed_url_path = parsed_url_path[1:]
filename = os.path.abspath(os.path.join(self.server.renderchan_rootdir, parsed_url_path))
renderchan = RenderChan()
renderchan.datadir = self.server.renderchan_datadir
renderchan.track = True
renderchan.dry_run = True
if "dryRun" in args:
renderchan.dry_run = bool(args["dryRun"])
if "profile" in args:
renderchan.setProfile(str(args["profile"]))
if "renderfarmType" in args and str(args["renderfarmType"]) in renderchan.available_renderfarm_engines:
renderchan.renderfarm_engine = str(args["renderfarmType"])
if "host" in args:
            if renderchan.renderfarm_engine in ("puli",):
renderchan.setHost(str(args["host"]))
else:
print("WARNING: The --host parameter cannot be set for this type of renderfarm.")
if "port" in args:
            if renderchan.renderfarm_engine in ("puli",):
renderchan.setPort(int(args["port"]))
else:
print("WARNING: The --port parameter cannot be set for this type of renderfarm.")
if "cgru_location" in args:
renderchan.cgru_location = str(args["cgru_location"])
if "snapshot_to" in args:
renderchan.snapshot_path = str(args["snapshot_to"])
if "force" in args:
renderchan.force = bool(args["force"])
if "force_proxy" in args:
renderchan.force_proxy = bool(args["force_proxy"])
error = renderchan.submit('render', filename, bool(args.get("dependenciesOnly")), bool(args.get("allocateOnly")), str(args.get("stereo")))
reply = {}
if error:
reply["error"] = error
self.send_response(200)
self.send_header("Content-type", "application/json")
self.end_headers()
        reply["files"] = []
for file in renderchan.trackedFiles.values():
if file["source"][0:len(self.server.renderchan_rootdir)] == self.server.renderchan_rootdir:
file["source"] = file["source"][len(self.server.renderchan_rootdir):]
reply["files"].append( file )
        self.wfile.write(bytes(json.dumps(reply), "UTF-8"))
def process_args():
parser = ArgumentParser(description=_("Run RenderChan HTTP-server."),
epilog=_("For more information about RenderChan, visit https://morevnaproject.org/renderchan/"))
parser.add_argument("--host", dest="host",
action="store",
default="",
help=_("Set HTTP-server host."))
parser.add_argument("--port", dest="port",
type=int,
action="store",
default=80,
help=_("Set HTTP-server port."))
parser.add_argument("--root", dest="root",
action="store",
default=".",
help=_("Set HTTP-server root directory."))
return parser.parse_args()
def main(datadir, argv):
args = process_args()
server = HTTPServer((args.host, args.port), RenderChanHTTPRequestHandler)
server.renderchan_datadir = datadir
server.renderchan_rootdir = os.path.abspath(args.root)
print("Starting RenderChan HTTP-server at " + args.host + ":" + str(args.port))
server.serve_forever()
|
(CNN) -- As a spending bill loaded with pork makes its way through Congress, President Obama is getting pushback from members of his own party who are questioning his vow to end wasteful spending.
The Senate could vote on the spending bill as early as Thursday.
The president on Wednesday pledged to turn the tide on an "era of fiscal irresponsibility," reiterating his campaign promise that the days of "pork ... as a strategy" are over.
And in a prime-time address before a joint session of Congress, Obama last week praised the $787 billion stimulus package signed into law, telling the nation, "I'm proud that we passed a recovery plan free of earmarks, and I want to pass a budget next year that ensures that each dollar we spend reflects only our most important national priorities."
But some in the audience found that hard to swallow.
"There was just a roar of laughter -- because there were earmarks," said Sen. Claire McCaskill, D-Missouri.
The scoffing continues as the president hammers away at reducing wasteful spending and saving taxpayers money while lawmakers on Capitol Hill load up a spending bill with more than 8,000 earmarks totaling nearly $8 billion.
The legislation in question is a $410 billion omnibus bill that would keep the federal government running through the rest of the fiscal year, which ends in September 2009.
About 60 percent of the earmarks are from Democrats, and about 40 percent are from Republicans, according to Taxpayers for Common Sense.
Ryan Alexander, the president of the Taxpayers for Common Sense, pointed out that not all earmarks are bad.
"They're not always good or bad. What's bad is the process. We don't know why certain projects get earmark funds and why other projects don't. Some of them may be good. But that could be just as well by accident as it is by design, because we have no idea why these projects are funded and why other projects aren't," he said.
"But the bloated omnibus requires sacrifice from no one, least of all the government. It only exacerbates the problem and hastens the day of reckoning," Bayh wrote in a Wall Street Journal editorial published Wednesday.
Democrats blocked amendments by Sens. John McCain, R-Arizona, and Tom Coburn, R-Oklahoma, that would have narrowed the spending on earmarks.
"So much for the promise of change. This may be -- in all the years I have been coming to this floor to complain about the earmark pork barrel corruption that this system has bred, this may be probably the worst, probably the worst," McCain said Tuesday.
The spending bill made it through the House last week. A vote in the Senate could come as early as Thursday, but it's unclear if there are the 60 votes necessary to send it to the floor, since some Democrats aren't supporting it.
Obama is expected to sign the bill when it reaches his desk.
But Democrats speaking out against the pork could just be flexing their muscles, said CNN contributor Roland Martin.
"I would love to see these same Democrats have the courage to actually stand up, look their fellow senators in the eye, Democrats and Republicans, and say, OK, let's get rid of your particular project," he said.
"What often happens in Congress is, they complain in terms of the general ... What I am saying is, call them out. Put it on the table," he said.
Those defending the earmarks say they make up just a small portion -- less than 1 percent -- of the overall bill.
House Speaker Nancy Pelosi, D-California, on Thursday defended congressional earmarks, despite calls from the White House earlier this week to reform the process.
"The legislatively initiated proposals in the appropriations bill, I think, are an appropriate function of Congress of the United States," Pelosi said.
But Pelosi said she does believe Congress should cut back on the number of earmarks.
Pelosi said after Congress finishes the $410 billion spending bill for this year, she planned to work with the Obama administration to find ways to reduce the number of earmarks in future spending bills.
The White House says this bill is just last year's unfinished business -- and next time, it will be different.
"We'll change the rules going forward," White House Press Secretary Robert Gibbs said Wednesday when asked about the legislation.
Obama presented his budget summary to Congress last week, but the full details of his 2010 budget won't be available until April.
CNN's Jason Carroll, Joe Johns, Kristi Keck and Deirdre Walsh contributed to this report.
|
from __future__ import absolute_import, print_function
from datetime import timedelta
from django.utils import timezone
from sentry.models import (
Activity, Group, GroupAssignee, GroupBookmark, GroupSeen, GroupSnooze,
GroupStatus, GroupTagValue, Release
)
from sentry.testutils import APITestCase
class GroupDetailsTest(APITestCase):
def test_simple(self):
self.login_as(user=self.user)
group = self.create_group()
url = '/api/0/issues/{}/'.format(group.id)
response = self.client.get(url, format='json')
assert response.status_code == 200, response.content
assert response.data['id'] == str(group.id)
assert response.data['firstRelease'] is None
def test_with_first_release(self):
self.login_as(user=self.user)
group = self.create_group()
release = Release.objects.create(
project=group.project,
version='1.0',
)
GroupTagValue.objects.create(
group=group,
project=group.project,
key='sentry:release',
value=release.version,
)
url = '/api/0/issues/{}/'.format(group.id)
response = self.client.get(url, format='json')
assert response.status_code == 200, response.content
assert response.data['id'] == str(group.id)
assert response.data['firstRelease']['version'] == release.version
class GroupUpdateTest(APITestCase):
def test_resolve(self):
self.login_as(user=self.user)
group = self.create_group()
url = '/api/0/issues/{}/'.format(group.id)
response = self.client.put(url, data={
'status': 'resolved',
}, format='json')
assert response.status_code == 200, response.content
group = Group.objects.get(
id=group.id,
project=group.project.id,
)
assert group.status == GroupStatus.RESOLVED
def test_snooze_duration(self):
group = self.create_group(checksum='a' * 32, status=GroupStatus.RESOLVED)
self.login_as(user=self.user)
url = '/api/0/issues/{}/'.format(group.id)
response = self.client.put(url, data={
'status': 'muted',
'snoozeDuration': 30,
}, format='json')
assert response.status_code == 200
snooze = GroupSnooze.objects.get(group=group)
assert snooze.until > timezone.now() + timedelta(minutes=29)
assert snooze.until < timezone.now() + timedelta(minutes=31)
assert response.data['statusDetails']['snoozeUntil'] == snooze.until
group = Group.objects.get(id=group.id)
assert group.get_status() == GroupStatus.MUTED
def test_bookmark(self):
self.login_as(user=self.user)
group = self.create_group()
url = '/api/0/issues/{}/'.format(group.id)
response = self.client.put(url, data={
'isBookmarked': '1',
}, format='json')
assert response.status_code == 200, response.content
# ensure we've created the bookmark
assert GroupBookmark.objects.filter(
group=group, user=self.user).exists()
def test_assign(self):
self.login_as(user=self.user)
group = self.create_group()
url = '/api/0/issues/{}/'.format(group.id)
response = self.client.put(url, data={
'assignedTo': self.user.username,
}, format='json')
assert response.status_code == 200, response.content
assert GroupAssignee.objects.filter(
group=group, user=self.user
).exists()
assert Activity.objects.filter(
group=group, user=self.user, type=Activity.ASSIGNED,
).count() == 1
response = self.client.put(url, format='json')
assert response.status_code == 200, response.content
assert GroupAssignee.objects.filter(
group=group, user=self.user
).exists()
response = self.client.put(url, data={
'assignedTo': '',
}, format='json')
assert response.status_code == 200, response.content
assert not GroupAssignee.objects.filter(
group=group, user=self.user
).exists()
def test_mark_seen(self):
self.login_as(user=self.user)
group = self.create_group()
url = '/api/0/issues/{}/'.format(group.id)
response = self.client.put(url, data={
'hasSeen': '1',
}, format='json')
assert response.status_code == 200, response.content
assert GroupSeen.objects.filter(
group=group, user=self.user).exists()
response = self.client.put(url, data={
'hasSeen': '0',
}, format='json')
assert response.status_code == 200, response.content
assert not GroupSeen.objects.filter(
group=group, user=self.user).exists()
def test_mark_seen_as_non_member(self):
user = self.create_user('foo@example.com', is_superuser=True)
self.login_as(user=user)
group = self.create_group()
url = '/api/0/issues/{}/'.format(group.id)
response = self.client.put(url, data={
'hasSeen': '1',
}, format='json')
assert response.status_code == 200, response.content
assert not GroupSeen.objects.filter(
group=group, user=self.user).exists()
class GroupDeleteTest(APITestCase):
def test_delete(self):
self.login_as(user=self.user)
group = self.create_group()
url = '/api/0/issues/{}/'.format(group.id)
with self.tasks():
response = self.client.delete(url, format='json')
assert response.status_code == 202, response.content
group = Group.objects.filter(id=group.id).exists()
assert not group
|
Have a lovely afternoon my dear..
And let your soul enjoy the love our world brings!
True happiness really is in celebrating the present moment!
The potential to change your life lies in the simplest of steps..
misconceptions.. Remember, with hard work there are no limits ..
Only thinking of what we would do when we meet this evening..
I just want to tell you how dear you are to me.. I wish to be with you all times of the day..
Have a lovely afternoon, handsome .!
It's a wonderful early afternoon, I hope it gets even better by hearing your nice sweet voice. Good afternoon, my love.
Our prime purpose in this life is to help others. And if you can't help them, at least don't hurt them... Good Afternoon!
Often we stand at life’s crossroads & view, what we think is the END. But a true friends tells 'relax dude, its just a BEND & not the END'.
i wish u a happy afternoon.
making history is so difficult.
Nice friendship is Like The breathing air… You Will Never See it… But You Will Always Feel its Presence.Good Afternoon.
Anybody can love a rose but no one loves a leaf that adds beauty to rose!
Anybdy can love a rose but no 1 luvs a leaf that adds beauty 2 rose!
but luv t 1 who cn make Ur life beautiful..
If we didn’t have suffering, we would never learn Compassion!
MORAL: Don’t love some1 who is beautiful, but love the 1 who can make Ur life beautiful..Good Afternoon.
its bettr 2 stay quiet.
Bcoz if love wasn’t enough..
do u think ur words will mattr….!!
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
"""Tests for UTMPX file parser."""
import unittest
from plaso.formatters import utmpx # pylint: disable=unused-import
from plaso.lib import timelib
from plaso.parsers import utmpx
from tests import test_lib as shared_test_lib
from tests.parsers import test_lib
class UtmpxParserTest(test_lib.ParserTestCase):
"""Tests for utmpx file parser."""
@shared_test_lib.skipUnlessHasTestFile([u'utmpx_mac'])
def testParse(self):
"""Tests the Parse function."""
parser_object = utmpx.UtmpxParser()
storage_writer = self._ParseFile([u'utmpx_mac'], parser_object)
self.assertEqual(len(storage_writer.events), 6)
event_object = storage_writer.events[0]
expected_timestamp = timelib.Timestamp.CopyFromString(
u'2013-11-13 17:52:34')
self.assertEqual(event_object.timestamp, expected_timestamp)
expected_message = (
u'User: N/A Status: BOOT_TIME '
u'Computer Name: localhost Terminal: N/A')
expected_short_message = u'User: N/A'
self._TestGetMessageStrings(
event_object, expected_message, expected_short_message)
event_object = storage_writer.events[1]
expected_timestamp = timelib.Timestamp.CopyFromString(
u'2013-11-13 17:52:41.736713')
self.assertEqual(event_object.timestamp, expected_timestamp)
self.assertEqual(event_object.user, u'moxilo')
    self.assertEqual(event_object.terminal, u'console')
self.assertEqual(event_object.status_type, 7)
self.assertEqual(event_object.computer_name, u'localhost')
expected_message = (
u'User: moxilo Status: '
u'USER_PROCESS '
u'Computer Name: localhost '
u'Terminal: console')
expected_short_message = u'User: moxilo'
self._TestGetMessageStrings(
event_object, expected_message, expected_short_message)
event_object = storage_writer.events[4]
expected_timestamp = timelib.Timestamp.CopyFromString(
u'2013-11-14 04:32:56.641464')
self.assertEqual(event_object.timestamp, expected_timestamp)
self.assertEqual(event_object.user, u'moxilo')
self.assertEqual(event_object.terminal, u'ttys002')
self.assertEqual(event_object.status_type, 8)
expected_message = (
u'User: moxilo Status: '
u'DEAD_PROCESS '
u'Computer Name: localhost '
u'Terminal: ttys002')
expected_short_message = u'User: moxilo'
self._TestGetMessageStrings(
event_object, expected_message, expected_short_message)
if __name__ == '__main__':
unittest.main()
|
You can have faith that our knowledgeable and experienced contractors in Colorado Springs will be licensed and ready to handle any request you might put to us. We have the know-how and ability to remodel and renovate any property into a dream setting.
UAC General Contractors Colorado Springs focuses on home kitchen and bathroom remodeling for property owners living in Colorado Springs and surrounding areas. You can feel at ease knowing that once you hire our services, a licensed contractor with many years of experience will be managing the project using a team of smart, honest and committed tradespeople. They know how to give you the look you want at the budget you have in mind while keeping the quality to a very high standard.
As a local general contractor in Colorado Springs we can get you the best discounts on appliances and building material for any home remodel or renovation from building suppliers and stores within the area. Because we have built a great relationship with these suppliers all the home remodeling material you want ordered will arrive on time and in fantastic condition the day it’s to be installed.
|
import time
import datetime as dt

dateEnd = dt.datetime.now()
strEnd = dateEnd.strftime('%Y-%m-%d %H:%M:%S')
print('now:' + strEnd)

# The program below splits a date range (given as strings) into several
# equal segments, then prints each segment's bounds as strings.
timeFrom = '2016-05-20 00:00:00'
timeTo = '2016-05-30 00:00:00'
phases = 3
print(timeFrom, timeTo)

# 1) str -> Unix timestamp
timeStampFrom = int(time.mktime(time.strptime(timeFrom, "%Y-%m-%d %H:%M:%S")))
timeStampTo = int(time.mktime(time.strptime(timeTo, "%Y-%m-%d %H:%M:%S")))

# 2) str -> datetime
timeFrom = dt.datetime.strptime(timeFrom, '%Y-%m-%d %H:%M:%S')
timeTo = dt.datetime.strptime(timeTo, '%Y-%m-%d %H:%M:%S')

interval = (timeStampTo - timeStampFrom) // phases  # integer division
startTimeStamp = timeStampFrom
for index in range(phases):
    endTimeStamp = startTimeStamp + interval
    # 3) timestamp -> datetime; fromtimestamp() (local time) mirrors the
    # local-time mktime() above, where utcfromtimestamp() would shift the
    # bounds by the UTC offset
    timeFromDatetime = dt.datetime.fromtimestamp(startTimeStamp)
    timeToDatetime = dt.datetime.fromtimestamp(endTimeStamp)
    # 4) datetime -> str
    strTimeFrom = timeFromDatetime.strftime("%Y-%m-%d %H:%M:%S")
    strTimeTo = timeToDatetime.strftime("%Y-%m-%d %H:%M:%S")
    print('------', strTimeFrom, strTimeTo)
    startTimeStamp = endTimeStamp
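The same split can be written as a small reusable function that works on datetime objects directly, avoiding the timestamp round-trip entirely (a sketch; the function name is illustrative):

```python
import datetime as dt

def split_range(start, end, phases):
    """Split [start, end] into `phases` equal segments of datetimes."""
    step = (end - start) / phases  # timedelta supports true division by int
    return [(start + i * step, start + (i + 1) * step) for i in range(phases)]

segments = split_range(dt.datetime(2016, 5, 20), dt.datetime(2016, 5, 30), 3)
for a, b in segments:
    print(a.strftime("%Y-%m-%d %H:%M:%S"), '->', b.strftime("%Y-%m-%d %H:%M:%S"))
```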
|
Amazing 6-Bedroom VD-2 Villa for Sale in Damac, Queen Meadows!!!
Bigger Plot Size | Single Row | 4 Bedroom | TH-H | Queen Meadows.
Looking to rent a villa in Queens Meadows instead?
|
# -*- coding: utf-8 -*-
# Licensed under a 3-clause BSD style license - see LICENSE.rst
# Standard library
import re
import textwrap
import warnings
from datetime import datetime
from urllib.request import urlopen, Request
# Third-party
from astropy import time as atime
from astropy.utils.console import color_print, _color_text
from . import get_sun
__all__ = []
class HumanError(ValueError):
pass
class CelestialError(ValueError):
pass
def get_sign(dt):
    """Return the western zodiac sign for the month/day of ``dt``."""
    month, day = int(dt.month), int(dt.day)
    # Inclusive end date of each sign; capricorn wraps the new year, so it is
    # covered by the first entry (through Jan 19) plus the fallback (Dec 22+).
    bounds = [(1, 19, "capricorn"), (2, 17, "aquarius"), (3, 19, "pisces"),
              (4, 19, "aries"), (5, 20, "taurus"), (6, 20, "gemini"),
              (7, 22, "cancer"), (8, 22, "leo"), (9, 22, "virgo"),
              (10, 22, "libra"), (11, 21, "scorpio"), (12, 21, "sagittarius")]
    for last_month, last_day, zodiac_sign in bounds:
        if (month, day) <= (last_month, last_day):
            return zodiac_sign
    return "capricorn"
_VALID_SIGNS = ["capricorn", "aquarius", "pisces", "aries", "taurus", "gemini",
"cancer", "leo", "virgo", "libra", "scorpio", "sagittarius"]
# Some of the constellation names map to different astrological "sign names".
# Astrologers really need to talk to the IAU...
_CONST_TO_SIGNS = {'capricornus': 'capricorn', 'scorpius': 'scorpio'}
_ZODIAC = ((1900, "rat"), (1901, "ox"), (1902, "tiger"),
(1903, "rabbit"), (1904, "dragon"), (1905, "snake"),
(1906, "horse"), (1907, "goat"), (1908, "monkey"),
(1909, "rooster"), (1910, "dog"), (1911, "pig"))
# https://stackoverflow.com/questions/12791871/chinese-zodiac-python-program
def _get_zodiac(yr):
return _ZODIAC[(yr - _ZODIAC[0][0]) % 12][1]
def horoscope(birthday, corrected=True, chinese=False):
"""
Enter your birthday as an `astropy.time.Time` object and
receive a mystical horoscope about things to come.
    Parameters
    ----------
    birthday : `astropy.time.Time`, `datetime.datetime`, or str
        Your birthday as a `datetime.datetime` or `astropy.time.Time` object
        or a "YYYY-MM-DD" string.
corrected : bool
Whether to account for the precession of the Earth instead of using the
ancient Greek dates for the signs. After all, you do want your *real*
horoscope, not a cheap inaccurate approximation, right?
chinese : bool
Chinese annual zodiac wisdom instead of Western one.
Returns
-------
Infinite wisdom, condensed into astrologically precise prose.
Notes
-----
This function was implemented on April 1. Take note of that date.
"""
from bs4 import BeautifulSoup
today = datetime.now()
err_msg = "Invalid response from celestial gods (failed to load horoscope)."
headers = {'User-Agent': 'foo/bar'}
special_words = {
'([sS]tar[s^ ]*)': 'yellow',
'([yY]ou[^ ]*)': 'magenta',
'([pP]lay[^ ]*)': 'blue',
'([hH]eart)': 'red',
'([fF]ate)': 'lightgreen',
}
if isinstance(birthday, str):
birthday = datetime.strptime(birthday, '%Y-%m-%d')
if chinese:
# TODO: Make this more accurate by using the actual date, not just year
# Might need third-party tool like https://pypi.org/project/lunardate
zodiac_sign = _get_zodiac(birthday.year)
url = ('https://www.horoscope.com/us/horoscopes/yearly/'
'{}-chinese-horoscope-{}.aspx'.format(today.year, zodiac_sign))
summ_title_sfx = f'in {today.year}'
try:
res = Request(url, headers=headers)
with urlopen(res) as f:
try:
doc = BeautifulSoup(f, 'html.parser')
# TODO: Also include Love, Family & Friends, Work, Money, More?
item = doc.find(id='overview')
desc = item.getText()
except Exception:
raise CelestialError(err_msg)
except Exception:
raise CelestialError(err_msg)
else:
birthday = atime.Time(birthday)
if corrected:
with warnings.catch_warnings():
warnings.simplefilter('ignore') # Ignore ErfaWarning
zodiac_sign = get_sun(birthday).get_constellation().lower()
zodiac_sign = _CONST_TO_SIGNS.get(zodiac_sign, zodiac_sign)
if zodiac_sign not in _VALID_SIGNS:
raise HumanError('On your birthday the sun was in {}, which is not '
'a sign of the zodiac. You must not exist. Or '
'maybe you can settle for '
'corrected=False.'.format(zodiac_sign.title()))
else:
zodiac_sign = get_sign(birthday.to_datetime())
url = f"http://www.astrology.com/us/horoscope/daily-overview.aspx?sign={zodiac_sign}"
summ_title_sfx = f"on {today.strftime('%Y-%m-%d')}"
res = Request(url, headers=headers)
with urlopen(res) as f:
try:
doc = BeautifulSoup(f, 'html.parser')
item = doc.find('span', {'class': 'date'})
desc = item.parent.getText()
except Exception:
raise CelestialError(err_msg)
print("*"*79)
color_print(f"Horoscope for {zodiac_sign.capitalize()} {summ_title_sfx}:",
'green')
print("*"*79)
for block in textwrap.wrap(desc, 79):
split_block = block.split()
for i, word in enumerate(split_block):
for re_word in special_words.keys():
match = re.search(re_word, word)
if match is None:
continue
split_block[i] = _color_text(match.groups()[0], special_words[re_word])
print(" ".join(split_block))
def inject_horoscope():
import astropy
astropy._yourfuture = horoscope
inject_horoscope()
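The long `elif` chain in `get_sign` can also be written as a data-driven lookup; a sketch over the same date cutoffs (the `_SIGN_CUTOFFS` table and the name `get_sign_lookup` are my own, not part of astropy):

```python
from datetime import datetime

# Last (month, day) on which each sign still applies, in calendar order.
_SIGN_CUTOFFS = [
    ((1, 19), "capricorn"), ((2, 17), "aquarius"), ((3, 19), "pisces"),
    ((4, 19), "aries"), ((5, 20), "taurus"), ((6, 20), "gemini"),
    ((7, 22), "cancer"), ((8, 22), "leo"), ((9, 22), "virgo"),
    ((10, 22), "libra"), ((11, 21), "scorpio"), ((12, 21), "sagittarius"),
]


def get_sign_lookup(dt):
    """First-match search on (month, day); equivalent to the elif chain."""
    for cutoff, sign in _SIGN_CUTOFFS:
        if (dt.month, dt.day) <= cutoff:  # tuples compare lexicographically
            return sign
    return "capricorn"  # Dec 22-31 wraps around to capricorn


print(get_sign_lookup(datetime(2020, 8, 1)))  # leo
```

The lexicographic tuple comparison does the month-then-day test in one expression, so the boundary dates live in a table rather than in twelve near-identical conditions.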
|
After more than fourteen years of consulting, UK Models’ strong team of experienced professionals knows the modelling industry inside and out. Having worked with thousands of models, helping them build their confidence and then their careers, UK Models knows that a professional life in fashion can take a multitude of different paths. The company’s goal is to build the model’s brand, guide them to their most promising route of success, and mentor them throughout their professional development process until they find an agency with which they feel comfortable or otherwise establish a substantial freelancing network and acquire the necessary skills to succeed in this modelling direction.
At the core of UK Models’ structure is the protection of the aspiring models’ wellbeing. The team emphasizes confidence rather than bodily perfection, because they know modelling takes far more than beauty. Instead, personality, professionalism and overall skill steer a client’s path to success – not just their general looks.
Aside from their rich website of advice, personal accounts on the industry, and data, a session with UK Models provides unparalleled insight into the field. Starting with an in-depth discussion of the aspiring model’s intentions, the company assesses the client’s marketability while taking their client’s own interests into account. After mapping out potential career paths and the steps necessary to bring each to fruition, the model has the opportunity to have their hair and make-up done for a professional photo shoot through Blue Rooms Studio. This photoshoot experience is usually the first step in building a model’s brand, getting them comfortable in front of the camera and setting them off on their path to success. Upon leaving their very first session, clients may also have the beginnings of a modelling portfolio.
Perhaps the most important service UK Models offers is its blatant honesty. The industry can be difficult and lacks transparency, making it hard for many prospective models to judge whether they have what it takes. It's the team's business to identify potential avenues to success and to provide clarity on what is or is not an appropriate course of action. Each model has a unique journey ahead, and by identifying opportunities that will push a career forward rather than stunt it, the UK Models team expedites that path to achievement.
With overwhelmingly positive reviews from clients, UK Models has an undeniable reputation for welcoming models with open arms as soon as they walk in the door, ensuring they feel comfortable enough to discuss both their dreams and fears. However, the team emphasizes that models will encounter no sugar-coating: all aspiring models are made to understand that the modelling business is not glamorous, pays little at the beginning, and requires a flexible schedule for extensive travel. Without that clear understanding, clients set themselves up for a disenchanting experience that will more than likely dissuade them from the industry altogether.
Instead of hiding the harsh realities, the staff focuses on confronting these issues with clients head-on, so that when they ultimately encounter them on their own they are prepared to navigate through or around these pitfalls. While UK Models offers the truth, the team also sits down at the drawing board with clients and maps out ways to avoid deceptive or low-paying jobs, along with mechanisms for adapting to an ever-changing lifestyle, if the aspiring model is sure they want to embark on the journey. The team understands that many aspiring models who come to the company don't have firsthand experience of what the industry entails; it's their job to provide that crash course.
Like many careers, modelling is challenging to break into without a connection, and UK Models is devoted to serving as that connection. Acting as confidants, career planners, portfolio directors and more, the staff is well-versed in developing young hopefuls.
With a website full of advice blogs and an exclusive section for parents of younger aspiring models, any client may find success with the help of UK Models. The first step is as simple as reaching out on their site!
|
verbose = True
saveFig = True
showFig = True
if verbose:
    print("Loading general modules...")
import numpy as np
import sys
if verbose:
    print("Loading matplotlib module...")
import matplotlib.pyplot as plt
if verbose:
    print("Loading custom functions...")
from defaultGetPower import readPowerFile
if verbose:
    print("  Reading power file...")
littleFile = 'rampRateTest/powerlog1.log'
#bigFile = 'experiments/cgroups/1456424681.514702900/powerlog.log'
bigFile = 'experiments/signal_insert_delays/1452732970.201413700/powerlog.log'
#logType = 'big'
logType = 'little'  # renamed from `type`, which shadows the builtin
if logType == 'little':
    logFile = littleFile
else:
    logFile = bigFile
powerData = readPowerFile(logFile, 1000, verbose)
# keep the timestamp column plus the four per-server power columns
powerData = powerData[:, [0, 4, 5, 6, 7]]
ignoreTime = 10
if showFig:
    if logType == 'little':
        time = np.linspace(0, powerData[-1, 0] - powerData[0, 0], powerData.shape[0])
        data = np.sum(powerData[:, 1:], axis=1)
    else:
        start = 30000 + 2950
        nSamples = 150  # renamed from `len`, which shadows the builtin
        #start = 0
        #nSamples = 10000
        # The four servers' samples are interleaved ten rows apart.
        data = np.zeros(nSamples)
        for i in range(4):
            data += powerData[start + 10 * i:start + 10 * i + nSamples, i + 1]
        time = np.linspace(0, powerData[start + nSamples - 1, 0] - powerData[start, 0], nSamples)
    plt.plot(time, data)
    plt.title('Example of fast ramp rate for four-server cluster')
    plt.xlabel('Time (seconds)')
    plt.ylabel('Cluster power (W)')
    plt.show()
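The interleaved summation in the 'big' branch can be checked in isolation on synthetic data; a sketch (the function name `cluster_power` and the synthetic log layout are my own assumptions about the real powerlog format):

```python
import numpy as np


def cluster_power(powerData, start, nSamples, nServers=4, stride=10):
    """Total cluster power when server i's samples are offset by stride*i rows.

    Column 0 is assumed to hold timestamps and columns 1..nServers the
    per-server power readings (the same layout the script above slices out).
    """
    cols = np.stack([powerData[start + stride * i:start + stride * i + nSamples, i + 1]
                     for i in range(nServers)])
    return cols.sum(axis=0)


# Deterministic synthetic log: 300 rows of time plus 4 constant power columns
# (10 W, 20 W, 30 W, 40 W), so the cluster total should be 100 W throughout.
demo = np.column_stack([np.arange(300.0)] +
                       [np.full(300, 10.0 * (i + 1)) for i in range(4)])
total = cluster_power(demo, start=5, nSamples=20)
```

Stacking the four aligned slices and summing along the server axis replaces the accumulation loop with one vectorized reduction.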
|
I think this fingerpicking use of pedal point must have its origin in piano playing (the same is true of walking bass). The best thing is to play the examples slowly at first, so you give your neurons a little time to make new links. I've also learned that isolating difficulties helps a lot in making things finally work. Don't think this is too hard a job, even if you are just beginning. Learning to play things like these has been very rewarding for me, and I hope you will enjoy the experience too.
Just in case you need it, here is a list of the chords that accompany the pedal-point notes, with their names and composition.
|
import pytest
import sys
import os
import shutil
import filecmp
from os.path import join
from traitlets.config import Config
from datetime import datetime
from ...apps.api import NbGraderAPI
from ...coursedir import CourseDirectory
from ...utils import rmtree, get_username, parse_utc
from .. import run_nbgrader
from .base import BaseTestApp
from .conftest import notwindows, windows
@pytest.fixture
def api(request, course_dir, db, exchange, cache):
config = Config()
config.CourseDirectory.course_id = "abc101"
config.Exchange.root = exchange
config.Exchange.cache = cache
config.CourseDirectory.root = course_dir
config.CourseDirectory.db_url = db
coursedir = CourseDirectory(config=config)
api = NbGraderAPI(coursedir, config=config)
return api
class TestNbGraderAPI(BaseTestApp):
if sys.platform == 'win32':
tz = "Coordinated Universal Time"
else:
tz = "UTC"
def test_get_source_assignments(self, api, course_dir):
assert api.get_source_assignments() == set([])
self._empty_notebook(join(course_dir, "source", "ps1", "problem1.ipynb"))
self._empty_notebook(join(course_dir, "source", "ps2", "problem1.ipynb"))
self._make_file(join(course_dir, "source", "blah"))
assert api.get_source_assignments() == {"ps1", "ps2"}
@notwindows
def test_get_released_assignments(self, api, exchange, course_dir):
assert api.get_released_assignments() == set([])
self._copy_file(join("files", "test.ipynb"), join(course_dir, "release", "ps1", "p1.ipynb"))
run_nbgrader(["release_assignment", "ps1", "--course", "abc101", "--Exchange.root={}".format(exchange)])
assert api.get_released_assignments() == {"ps1"}
api.course_id = None
assert api.get_released_assignments() == set([])
@windows
def test_get_released_assignments_windows(self, api, exchange, course_dir):
assert api.get_released_assignments() == set([])
api.course_id = 'abc101'
assert api.get_released_assignments() == set([])
def test_get_submitted_students(self, api, course_dir):
assert api.get_submitted_students("ps1") == set([])
self._empty_notebook(join(course_dir, "submitted", "foo", "ps1", "problem1.ipynb"))
self._empty_notebook(join(course_dir, "submitted", "bar", "ps1", "problem1.ipynb"))
self._make_file(join(course_dir, "submitted", "blah"))
assert api.get_submitted_students("ps1") == {"foo", "bar"}
assert api.get_submitted_students("*") == {"foo", "bar"}
def test_get_submitted_timestamp(self, api, course_dir):
assert api.get_submitted_timestamp("ps1", "foo") is None
self._empty_notebook(join(course_dir, "submitted", "foo", "ps1", "problem1.ipynb"))
assert api.get_submitted_timestamp("ps1", "foo") is None
timestamp = datetime.now()
self._make_file(join(course_dir, "submitted", "foo", "ps1", "timestamp.txt"), contents=timestamp.isoformat())
assert api.get_submitted_timestamp("ps1", "foo") == timestamp
def test_get_autograded_students(self, api, course_dir, db):
self._empty_notebook(join(course_dir, "source", "ps1", "problem1.ipynb"))
run_nbgrader(["generate_assignment", "ps1", "--db", db])
# submitted and autograded exist, but not in the database
self._empty_notebook(join(course_dir, "submitted", "foo", "ps1", "problem1.ipynb"))
timestamp = datetime.now()
self._make_file(join(course_dir, "submitted", "foo", "ps1", "timestamp.txt"), contents=timestamp.isoformat())
self._empty_notebook(join(course_dir, "autograded", "foo", "ps1", "problem1.ipynb"))
        self._make_file(join(course_dir, "autograded", "foo", "ps1", "timestamp.txt"), contents=timestamp.isoformat())
assert api.get_autograded_students("ps1") == set([])
# run autograde so things are consistent
run_nbgrader(["autograde", "ps1", "--no-execute", "--force", "--db", db])
assert api.get_autograded_students("ps1") == {"foo"}
# updated submission
timestamp = datetime.now()
self._make_file(join(course_dir, "submitted", "foo", "ps1", "timestamp.txt"), contents=timestamp.isoformat())
assert api.get_autograded_students("ps1") == set([])
def test_get_autograded_students_no_timestamps(self, api, course_dir, db):
self._empty_notebook(join(course_dir, "source", "ps1", "problem1.ipynb"))
run_nbgrader(["generate_assignment", "ps1", "--db", db])
# submitted and autograded exist, but not in the database
self._empty_notebook(join(course_dir, "submitted", "foo", "ps1", "problem1.ipynb"))
self._empty_notebook(join(course_dir, "autograded", "foo", "ps1", "problem1.ipynb"))
assert api.get_autograded_students("ps1") == set([])
# run autograde so things are consistent
run_nbgrader(["autograde", "ps1", "--no-execute", "--force", "--db", db])
assert api.get_autograded_students("ps1") == {"foo"}
# updated submission
timestamp = datetime.now()
self._make_file(join(course_dir, "submitted", "foo", "ps1", "timestamp.txt"), contents=timestamp.isoformat())
assert api.get_autograded_students("ps1") == set([])
def test_get_assignment(self, api, course_dir, db, exchange):
keys = set([
'average_code_score', 'average_score', 'average_written_score',
'duedate', 'name', 'num_submissions', 'release_path', 'releaseable',
'source_path', 'status', 'id', 'max_code_score', 'max_score',
'max_written_score', 'display_duedate', 'duedate_timezone',
'duedate_notimezone',
'max_task_score', 'average_task_score'])
default = {
"average_code_score": 0,
"average_score": 0,
"average_written_score": 0,
"average_task_score": 0,
"duedate": None,
"display_duedate": None,
"duedate_timezone": "+0000",
"duedate_notimezone": None,
"name": "ps1",
"num_submissions": 0,
"release_path": None,
"releaseable": True if sys.platform != 'win32' else False,
"source_path": join("source", "ps1"),
"status": "draft",
"id": None,
"max_code_score": 0,
"max_score": 0,
"max_written_score": 0,
"max_task_score": 0
}
# check that return value is None when there is no assignment
a = api.get_assignment("ps1")
assert a is None
# check the values when the source assignment exists, but hasn't been
# released yet
self._copy_file(join("files", "test.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
a = api.get_assignment("ps1")
assert set(a.keys()) == keys
target = default.copy()
assert a == target
# check that it is not releasable if the course id isn't set
api.course_id = None
a = api.get_assignment("ps1")
assert set(a.keys()) == keys
target = default.copy()
target["releaseable"] = False
assert a == target
# check the values once the student version of the assignment has been created
api.course_id = "abc101"
run_nbgrader(["generate_assignment", "ps1", "--db", db])
a = api.get_assignment("ps1")
assert set(a.keys()) == keys
target = default.copy()
target["release_path"] = join("release", "ps1")
target["id"] = a["id"]
target["max_code_score"] = 5
target["max_score"] = 6
target["max_written_score"] = 1
target["max_task_score"] = 1
assert a == target
# check that timestamps are handled correctly
with api.gradebook as gb:
assignment = gb.find_assignment("ps1")
assignment.duedate = parse_utc("2017-07-05 12:22:08 UTC")
gb.db.commit()
a = api.get_assignment("ps1")
default["duedate"] = "2017-07-05T12:22:08"
default["display_duedate"] = "2017-07-05 12:22:08 {}".format(self.tz)
default["duedate_notimezone"] = "2017-07-05T12:22:08"
assert a["duedate"] == default["duedate"]
assert a["display_duedate"] == default["display_duedate"]
assert a["duedate_notimezone"] == default["duedate_notimezone"]
assert a["duedate_timezone"] == default["duedate_timezone"]
# check the values once the assignment has been released and unreleased
if sys.platform != "win32":
run_nbgrader(["release_assignment", "ps1", "--course", "abc101", "--Exchange.root={}".format(exchange)])
a = api.get_assignment("ps1")
assert set(a.keys()) == keys
target = default.copy()
target["release_path"] = join("release", "ps1")
target["id"] = a["id"]
target["max_code_score"] = 5
target["max_score"] = 6
target["max_written_score"] = 1
target["max_task_score"] = 1
target["releaseable"] = True
target["status"] = "released"
assert a == target
run_nbgrader(["list", "ps1", "--course", "abc101", "--Exchange.root={}".format(exchange), "--remove"])
a = api.get_assignment("ps1")
assert set(a.keys()) == keys
target = default.copy()
target["release_path"] = join("release", "ps1")
target["id"] = a["id"]
target["max_code_score"] = 5
target["max_score"] = 6
target["max_written_score"] = 1
target["max_task_score"] = 1
assert a == target
# check the values once there are submissions as well
self._empty_notebook(join(course_dir, "submitted", "foo", "ps1", "problem1.ipynb"))
self._empty_notebook(join(course_dir, "submitted", "bar", "ps1", "problem1.ipynb"))
a = api.get_assignment("ps1")
assert set(a.keys()) == keys
target = default.copy()
target["release_path"] = join("release", "ps1")
target["id"] = a["id"]
target["max_code_score"] = 5
target["max_score"] = 6
target["max_written_score"] = 1
target["max_task_score"] = 1
target["num_submissions"] = 2
assert a == target
def test_get_assignments(self, api, course_dir):
assert api.get_assignments() == []
self._empty_notebook(join(course_dir, "source", "ps1", "problem1.ipynb"))
self._empty_notebook(join(course_dir, "source", "ps2", "problem1.ipynb"))
a = api.get_assignments()
assert len(a) == 2
assert a[0] == api.get_assignment("ps1")
assert a[1] == api.get_assignment("ps2")
def test_get_notebooks(self, api, course_dir, db):
keys = set([
'average_code_score', 'average_score', 'average_written_score',
'name', 'id', 'max_code_score', 'max_score', 'max_written_score',
'max_task_score', 'average_task_score',
'needs_manual_grade', 'num_submissions'])
default = {
"name": "p1",
"id": None,
"average_code_score": 0,
"max_code_score": 0,
"average_score": 0,
"max_score": 0,
"average_written_score": 0,
"max_written_score": 0,
"average_task_score": 0,
"max_task_score": 0,
"needs_manual_grade": False,
"num_submissions": 0
}
# check that return value is None when there is no assignment
n = api.get_notebooks("ps1")
assert n == []
# check values before nbgrader generate_assignment is run
self._copy_file(join("files", "test.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
n1, = api.get_notebooks("ps1")
assert set(n1.keys()) == keys
assert n1 == default.copy()
# add it to the database (but don't assign yet)
with api.gradebook as gb:
gb.update_or_create_assignment("ps1")
n1, = api.get_notebooks("ps1")
assert set(n1.keys()) == keys
assert n1 == default.copy()
# check values after nbgrader generate_assignment is run
run_nbgrader(["generate_assignment", "ps1", "--db", db, "--force"])
n1, = api.get_notebooks("ps1")
assert set(n1.keys()) == keys
target = default.copy()
target["id"] = n1["id"]
target["max_code_score"] = 5
target["max_score"] = 6
target["max_written_score"] = 1
assert n1 == target
def test_get_submission(self, api, course_dir, db):
keys = set([
"id", "name", "student", "last_name", "first_name", "score",
"max_score", "code_score", "max_code_score", "written_score",
"max_written_score", "task_score", "max_task_score", "needs_manual_grade", "autograded",
"timestamp", "submitted", "display_timestamp"])
default = {
"id": None,
"name": "ps1",
"student": "foo",
"last_name": None,
"first_name": None,
"score": 0,
"max_score": 0,
"code_score": 0,
"max_code_score": 0,
"written_score": 0,
"max_written_score": 0,
"task_score": 0,
"max_task_score": 0,
"needs_manual_grade": False,
"autograded": False,
"timestamp": None,
"display_timestamp": None,
"submitted": False
}
s = api.get_submission("ps1", "foo")
assert s == default.copy()
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
run_nbgrader(["generate_assignment", "ps1", "--db", db])
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
self._make_file(join(course_dir, "submitted", "foo", "ps1", "timestamp.txt"), contents="2017-07-05T12:32:56.123456")
s = api.get_submission("ps1", "foo")
assert set(s.keys()) == keys
target = default.copy()
target["submitted"] = True
target["timestamp"] = "2017-07-05T12:32:56.123456"
target["display_timestamp"] = "2017-07-05 12:32:56 {}".format(self.tz)
assert s == target
run_nbgrader(["autograde", "ps1", "--no-execute", "--force", "--db", db])
s = api.get_submission("ps1", "foo")
target = default.copy()
target["id"] = s["id"]
target["autograded"] = True
target["submitted"] = True
target["timestamp"] = "2017-07-05T12:32:56.123456"
target["display_timestamp"] = "2017-07-05 12:32:56 {}".format(self.tz)
target["code_score"] = 2
target["max_code_score"] = 5
target["score"] = 2
target["max_score"] = 7
target["written_score"] = 0
target["max_written_score"] = 2
target["needs_manual_grade"] = True
assert s == target
def test_get_submission_no_timestamp(self, api, course_dir, db):
keys = set([
"id", "name", "student", "last_name", "first_name", "score",
"max_score", "code_score", "max_code_score", "written_score",
"max_written_score", "task_score", "max_task_score", "needs_manual_grade", "autograded",
"timestamp", "submitted", "display_timestamp"])
default = {
"id": None,
"name": "ps1",
"student": "foo",
"last_name": None,
"first_name": None,
"score": 0,
"max_score": 0,
"code_score": 0,
"max_code_score": 0,
"written_score": 0,
"max_written_score": 0,
"task_score": 0,
"max_task_score": 0,
"needs_manual_grade": False,
"autograded": False,
"timestamp": None,
"display_timestamp": None,
"submitted": False
}
s = api.get_submission("ps1", "foo")
assert s == default.copy()
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
run_nbgrader(["generate_assignment", "ps1", "--db", db])
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
s = api.get_submission("ps1", "foo")
assert set(s.keys()) == keys
target = default.copy()
target["submitted"] = True
assert s == target
run_nbgrader(["autograde", "ps1", "--no-execute", "--force", "--db", db])
s = api.get_submission("ps1", "foo")
target = default.copy()
target["id"] = s["id"]
target["autograded"] = True
target["submitted"] = True
target["code_score"] = 2
target["max_code_score"] = 5
target["score"] = 2
target["max_score"] = 7
target["written_score"] = 0
target["max_written_score"] = 2
target["needs_manual_grade"] = True
assert s == target
def test_get_submissions(self, api, course_dir, db):
assert api.get_submissions("ps1") == []
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
run_nbgrader(["generate_assignment", "ps1", "--db", db])
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
timestamp = datetime.now()
self._make_file(join(course_dir, "submitted", "foo", "ps1", "timestamp.txt"), contents=timestamp.isoformat())
s1, = api.get_submissions("ps1")
assert s1 == api.get_submission("ps1", "foo")
run_nbgrader(["autograde", "ps1", "--no-execute", "--force", "--db", db])
s1, = api.get_submissions("ps1")
assert s1 == api.get_submission("ps1", "foo")
def test_filter_existing_notebooks(self, api, course_dir, db):
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p2.ipynb"))
run_nbgrader(["generate_assignment", "ps1", "--db", db])
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
run_nbgrader(["autograde", "ps1", "--no-execute", "--force", "--db", db])
with api.gradebook as gb:
notebooks = gb.notebook_submissions("p1", "ps1")
s = api._filter_existing_notebooks("ps1", notebooks)
assert s == notebooks
notebooks = gb.notebook_submissions("p2", "ps1")
s = api._filter_existing_notebooks("ps1", notebooks)
assert s == []
@notwindows
def test_filter_existing_notebooks_strict(self, api, course_dir, db):
api.config.ExchangeSubmit.strict = True
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p2.ipynb"))
run_nbgrader(["generate_assignment", "ps1", "--db", db])
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
run_nbgrader(["autograde", "ps1", "--no-execute", "--force", "--db", db])
with api.gradebook as gb:
notebooks = gb.notebook_submissions("p1", "ps1")
s = api._filter_existing_notebooks("ps1", notebooks)
assert s == notebooks
notebooks = gb.notebook_submissions("p2", "ps1")
s = api._filter_existing_notebooks("ps1", notebooks)
assert s == notebooks
def test_get_notebook_submission_indices(self, api, course_dir, db):
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
run_nbgrader(["generate_assignment", "ps1", "--db", db])
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "bar", "ps1", "p1.ipynb"))
run_nbgrader(["autograde", "ps1", "--no-execute", "--force", "--db", db])
with api.gradebook as gb:
notebooks = gb.notebook_submissions("p1", "ps1")
notebooks.sort(key=lambda x: x.id)
idx = api.get_notebook_submission_indices("ps1", "p1")
assert idx[notebooks[0].id] == 0
assert idx[notebooks[1].id] == 1
def test_get_notebook_submissions(self, api, course_dir, db):
assert api.get_notebook_submissions("ps1", "p1") == []
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
run_nbgrader(["generate_assignment", "ps1", "--db", db])
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "bar", "ps1", "p1.ipynb"))
run_nbgrader(["autograde", "ps1", "--no-execute", "--force", "--db", db])
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "baz", "ps1", "p1.ipynb"))
s = api.get_notebook_submissions("ps1", "p1")
assert len(s) == 2
with api.gradebook as gb:
notebooks = gb.notebook_submissions("p1", "ps1")
notebooks.sort(key=lambda x: x.id)
notebooks = [x.to_dict() for x in notebooks]
for i in range(2):
notebooks[i]["index"] = i
assert s[i] == notebooks[i]
def test_get_student(self, api, course_dir, db):
assert api.get_student("foo") is None
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
assert api.get_student("foo") == {
"id": "foo",
"last_name": None,
"first_name": None,
"email": None,
"lms_user_id": None,
"max_score": 0,
"score": 0
}
rmtree(join(course_dir, "submitted", "foo"))
with api.gradebook as gb:
gb.add_student("foo")
assert api.get_student("foo") == {
"id": "foo",
"last_name": None,
"first_name": None,
"email": None,
"lms_user_id": None,
"max_score": 0,
"score": 0
}
gb.update_or_create_student("foo", last_name="Foo", first_name="A", email="a.foo@email.com", lms_user_id="230")
assert api.get_student("foo") == {
"id": "foo",
"last_name": "Foo",
"first_name": "A",
"email": "a.foo@email.com",
"lms_user_id": "230",
"max_score": 0,
"score": 0
}
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
run_nbgrader(["generate_assignment", "ps1", "--db", db])
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
run_nbgrader(["autograde", "ps1", "--no-execute", "--force", "--db", db])
assert api.get_student("foo") == {
"id": "foo",
"last_name": "Foo",
"first_name": "A",
"email": "a.foo@email.com",
"lms_user_id": "230",
"max_score": 7,
"score": 2
}
def test_get_students(self, api, course_dir):
assert api.get_students() == []
with api.gradebook as gb:
gb.update_or_create_student("foo", last_name="Foo", first_name="A", email="a.foo@email.com", lms_user_id=None)
s1 = {
"id": "foo",
"last_name": "Foo",
"first_name": "A",
"email": "a.foo@email.com",
"lms_user_id": None,
"max_score": 0,
"score": 0
}
assert api.get_students() == [s1]
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "bar", "ps1", "p1.ipynb"))
s2 = {
"id": "bar",
"last_name": None,
"first_name": None,
"email": None,
"lms_user_id": None,
"max_score": 0,
"score": 0
}
assert api.get_students() == [s1, s2]
def test_get_student_submissions(self, api, course_dir, db):
assert api.get_student_submissions("foo") == []
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
run_nbgrader(["generate_assignment", "ps1", "--db", db])
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
timestamp = datetime.now()
self._make_file(join(course_dir, "submitted", "foo", "ps1", "timestamp.txt"), contents=timestamp.isoformat())
run_nbgrader(["autograde", "ps1", "--no-execute", "--force", "--db", db])
assert api.get_student_submissions("foo") == [api.get_submission("ps1", "foo")]
def test_get_student_notebook_submissions(self, api, course_dir, db):
assert api.get_student_notebook_submissions("foo", "ps1") == []
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p2.ipynb"))
run_nbgrader(["generate_assignment", "ps1", "--db", db])
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
run_nbgrader(["autograde", "ps1", "--no-execute", "--force", "--db", db])
s_p1, s_p2 = api.get_student_notebook_submissions("foo", "ps1")
p1, = api.get_notebook_submissions("ps1", "p1")
del p1["index"]
assert s_p1 == p1
assert s_p2 == {
"id": None,
"name": "p2",
"student": "foo",
"last_name": None,
"first_name": None,
"score": 0,
"max_score": 7,
"code_score": 0,
"max_code_score": 5,
"written_score": 0,
"max_written_score": 2,
"task_score": 0,
"max_task_score": 0,
"needs_manual_grade": False,
"failed_tests": False,
"flagged": False
}
def test_deprecation(self, api, course_dir, db):
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
result = api.generate_assignment("ps1")
assert result["success"]
assert os.path.exists(join(course_dir, "release", "ps1", "p1.ipynb"))
os.makedirs(join(course_dir, "source", "ps2"))
result = api.assign("ps2")
assert not result["success"]
def test_generate_assignment(self, api, course_dir, db):
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
result = api.generate_assignment("ps1")
assert result["success"]
assert os.path.exists(join(course_dir, "release", "ps1", "p1.ipynb"))
os.makedirs(join(course_dir, "source", "ps2"))
result = api.generate_assignment("ps2")
assert not result["success"]
@notwindows
def test_release_deprecated(self, api, course_dir, db, exchange):
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
result = api.generate_assignment("ps1")
result = api.release("ps1")
assert result["success"]
assert os.path.exists(join(exchange, "abc101", "outbound", "ps1", "p1.ipynb"))
@notwindows
def test_release_and_unrelease(self, api, course_dir, db, exchange):
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
result = api.generate_assignment("ps1")
result = api.release_assignment("ps1")
assert result["success"]
assert os.path.exists(join(exchange, "abc101", "outbound", "ps1", "p1.ipynb"))
result = api.release_assignment("ps1")
assert not result["success"]
result = api.unrelease("ps1")
assert result["success"]
assert not os.path.exists(join(exchange, "abc101", "outbound", "ps1", "p1.ipynb"))
@notwindows
def test_collect(self, api, course_dir, db, exchange):
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
result = api.generate_assignment("ps1")
result = api.release_assignment("ps1")
result = api.collect("ps1")
assert result["success"]
assert "No submissions" in result["log"]
run_nbgrader(["fetch_assignment", "ps1", "--course", "abc101", "--Exchange.root={}".format(exchange)])
run_nbgrader(["submit", "ps1", "--course", "abc101", "--Exchange.root={}".format(exchange)])
username = get_username()
result = api.collect("ps1")
assert result["success"]
assert "Collecting submission" in result["log"]
assert os.path.exists(join(course_dir, "submitted", username, "ps1", "p1.ipynb"))
run_nbgrader(["submit", "ps1", "--course", "abc101", "--Exchange.root={}".format(exchange)])
result = api.collect("ps1")
assert result["success"]
assert "Updating submission" in result["log"]
assert os.path.exists(join(course_dir, "submitted", username, "ps1", "p1.ipynb"))
@notwindows
def test_autograde(self, api, course_dir, db):
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
api.generate_assignment("ps1")
result = api.autograde("ps1", "foo")
assert not result["success"]
assert "No notebooks were matched" in result["log"]
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
result = api.autograde("ps1", "foo")
assert result["success"]
assert os.path.exists(join(course_dir, "autograded", "foo", "ps1", "p1.ipynb"))
result = api.autograde("ps1", "foo")
assert result["success"]
def test_generate_feedback(self, api, course_dir, db):
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
api.generate_assignment("ps1")
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
api.autograde("ps1", "foo")
result = api.generate_feedback("ps1", "foo")
assert result["success"]
assert os.path.exists(join(course_dir, "feedback", "foo", "ps1", "p1.html"))
contents = open(join(course_dir, "feedback", "foo", "ps1", "p1.html"), "r").read()
# update the grade
with api.gradebook as gb:
nb = gb.find_submission_notebook("p1", "ps1", "foo")
nb.grades[0].manual_score = 123
gb.db.commit()
# contents shouldn't have changed, because force=False
result = api.generate_feedback("ps1", "foo", force=False)
assert result["success"]
assert os.path.exists(join(course_dir, "feedback", "foo", "ps1", "p1.html"))
new_contents = open(join(course_dir, "feedback", "foo", "ps1", "p1.html"), "r").read()
assert new_contents == contents
# contents should now have changed, because force=True
result = api.generate_feedback("ps1", "foo", force=True)
assert result["success"]
assert os.path.exists(join(course_dir, "feedback", "foo", "ps1", "p1.html"))
new_contents = open(join(course_dir, "feedback", "foo", "ps1", "p1.html"), "r").read()
assert new_contents != contents
# should not work for an empty submission
os.makedirs(join(course_dir, "submitted", "foo", "ps2"))
result = api.generate_feedback("ps2", "foo")
assert not result["success"]
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps2", "p2.ipynb"))
api.generate_assignment("ps2")
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "foo", "ps2", "p2.ipynb"))
api.autograde("ps2", "foo")
result = api.generate_feedback("ps2", "foo")
assert result["success"]
@notwindows
def test_release_feedback(self, api, course_dir, db, exchange):
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
api.generate_assignment("ps1")
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
self._copy_file(join("files", "timestamp.txt"), join(course_dir, "submitted", "foo", "ps1", "timestamp.txt"))
api.autograde("ps1", "foo")
api.generate_feedback("ps1", "foo")
result = api.release_feedback("ps1", "foo")
assert result["success"]
assert os.path.isdir(join(exchange, "abc101", "feedback"))
assert os.path.exists(join(exchange, "abc101", "feedback", "c600ef68c434c3d136bb5e68ea874169.html"))
# add another assignment
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps2", "p2.ipynb"))
api.generate_assignment("ps2")
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "submitted", "foo", "ps2", "p2.ipynb"))
self._copy_file(join("files", "timestamp.txt"), join(course_dir, "submitted", "foo", "ps2", "timestamp.txt"))
api.autograde("ps2", "foo")
api.generate_feedback("ps2", "foo")
result = api.release_feedback("ps2", "foo")
assert result["success"]
assert os.path.exists(join(exchange, "abc101", "feedback", "e190e1f234b633832f2069f4f8a3a680.html"))
@notwindows
def test_fetch_feedback(self, api, course_dir, db, cache):
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps1", "p1.ipynb"))
api.generate_assignment("ps1")
timestamp = open(os.path.join(os.path.dirname(__file__), "files", "timestamp.txt")).read()
cachepath = join(cache, "abc101", "foo+ps1+{}".format(timestamp))
self._copy_file(join("files", "submitted-changed.ipynb"), join(cachepath, "p1.ipynb"))
self._copy_file(join("files", "timestamp.txt"), join(cachepath, "timestamp.txt"))
self._copy_file(join("files", "submitted-changed.ipynb"), join(course_dir, "submitted", "foo", "ps1", "p1.ipynb"))
self._copy_file(join("files", "timestamp.txt"), join(course_dir, "submitted", "foo", "ps1", "timestamp.txt"))
api.autograde("ps1", "foo")
api.generate_feedback("ps1", "foo")
api.release_feedback("ps1", "foo")
result = api.fetch_feedback("ps1", "foo")
assert result["success"]
assert os.path.isdir(join("ps1", "feedback"))
assert os.path.exists(join("ps1", "feedback", timestamp, "p1.html"))
# add another assignment
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "source", "ps2", "p2.ipynb"))
api.generate_assignment("ps2")
cachepath = join(cache, "abc101", "foo+ps2+{}".format(timestamp))
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(cachepath, "p2.ipynb"))
self._copy_file(join("files", "timestamp.txt"), join(cachepath, "timestamp.txt"))
self._copy_file(join("files", "submitted-unchanged.ipynb"), join(course_dir, "submitted", "foo", "ps2", "p2.ipynb"))
self._copy_file(join("files", "timestamp.txt"), join(course_dir, "submitted", "foo", "ps2", "timestamp.txt"))
api.autograde("ps2", "foo")
api.generate_feedback("ps2", "foo")
api.release_feedback("ps2", "foo")
result = api.fetch_feedback("ps2", "foo")
assert result["success"]
assert os.path.exists(join("ps2", "feedback", timestamp, "p2.html"))
|
3D Forensic | Eminent Domain and Real Property | 3D Forensic, Inc.
Eminent domain is one of the most important areas of law, yet it is rarely well understood by jurors. Use 3D animation to educate the jury about the facts of your case. Whether the issue is inverse condemnation or a gap between competing appraisers' land valuations, allowing the jury, opposing counsel, or a mediator to visualize the facts of your case gives you the advantage you need to succeed in the courtroom.
One of the most powerful tools we offer is the ability to show the jury the future effects of a project. Our suite of 3D laser scanning, laser-based photogrammetry, and 3D modeling brings the future into the courtroom today. Whether the goal is to show that a project will have no effect on a neighbor's waterfront view, or that it will prevent customers from seeing your building from the road, our team can visually illustrate the true after condition.
Several Eminent Domain and Real Estate project samples are shown.
A landowner presented a valuation based on comparable development projects for this land. The comparison might have looked reasonable on charts and topographic maps, but the land had slope issues that made development very expensive and restrictive. The final valuation came in much lower, an outcome greatly aided by computer modeling.
A landowner wanted to allow access only directly below a proposed PG&E power line, and we had to show the slope issues involved. In fact, the model shows the slope to be steeper than Nob Hill in San Francisco.
|
# BEGIN_COPYRIGHT
# END_COPYRIGHT
# pylint: disable=W0105, C0103
""" ..
This example shows how to handle genetic marker data.
**NOTE:** the example assumes that the KB already contains all objects
created by the example on importing individuals.
Suppose you have run a series of genotyping assays where the DNA
sample in each well of a titer plate has been associated to a
collection of genotyping values. To import this information into the
KB, we define a collection of marker data:
"""
import sys, os, uuid
import numpy as np
from bl.vl.kb import KnowledgeBase as KB
OME_HOST = os.getenv('OME_HOST', 'localhost')
OME_USER = os.getenv('OME_USER', 'test')
OME_PASSWD = os.getenv('OME_PASSWD', 'test')
STUDY_LABEL = 'KB_EXAMPLES'
MSET_LABEL = 'DUMMY_MS'
REF_GENOME = 'DUMMY_GENOME'
kb = KB(driver='omero')(OME_HOST, OME_USER, OME_PASSWD)
marker_defs = [
# label mask index allele_flip
('A001', 'TCACTTCTTCAAAGCT[A/G]AGCTACAAGCATTATT', 0, False),
('A002', 'GGAAGGAAGAAATAAA[C/G]CAGCACTATGTCTGGC', 1, False),
('A003', 'CCGACCTAGTAGGCAA[A/G]TAGACACTGAGGCTGA', 2, False),
('A004', 'AGGTCTATGTTAATAC[A/G]GAATCAGTTTCTCACC', 3, True),
('A005', 'AGATTACCATGCAGGA[A/T]CTGTTCTGAGATTAGC', 4, False),
('A006', 'TCTACCTCTGTGACTA[C/G]AAGTGTTCTTTTATTT', 5, True),
('A007', 'AAGGCAATACTGTTCA[C/T]ATTGTATGGAAAGAAG', 6, True),
]
""" ..
See the :ref:`import tool documentation <import_tool>` for
details on the mask, index and allele flip fields. Now we have to
import the above definitions into the KB:
"""
study = kb.get_study(STUDY_LABEL)
if study is None:
sys.exit("ERROR: study '%s' not found" % STUDY_LABEL)
action = kb.create_an_action(study)
maker, model, release = (uuid.uuid4().hex for _ in xrange(3))
N, stream = len(marker_defs), iter(marker_defs)
mset = kb.create_snp_markers_set(
MSET_LABEL, maker, model, release, N, stream, action
)
""" ..
If markers have been aligned to a reference genome, we can store the
alignment information. This information must be provided in the form
of a stream of tuples that contain the marker's id within the KB
(called *vid*), chromosome number, position within the chromosome,
strand info and number of copies. Again, the :ref:`import tool docs
<import_tool>` provide more details on this matter. In this case, we
will auto-generate dummy alignment info for all markers in the set:
"""
mset.load_markers()
aligns = [(m['vid'], i+1, (i+1)*1000, True, 'A' if (i%2)== 0 else 'B', 1)
for i, m in enumerate(mset.markers)]
kb.align_snp_markers_set(mset, REF_GENOME, iter(aligns), action)
""" ...
In OMERO.biobank, genotyping data is represented by a pair of arrays:
* a 2 X N array where each column represents the probabilities of
being homozygous for allele A and B, respectively;
* a 1 X N array where each element represents a degree of confidence
related to the corresponding probabilities in the above array;
where N is the number of markers in the reference set. The following
snippet generates dummy genotyping data for all individuals enrolled
in the study:
"""
def make_dummy_data(ms):
n = len(ms)
probabilities = 0.5 * np.cast[np.float32](np.random.random((2, n)))
confidence_values = np.cast[np.float32](np.random.random(n))
return probabilities, confidence_values
data_sample_list = []
for i, ind in enumerate(kb.get_individuals(study)):
action = kb.create_an_action(study, target=ind)
config = {
'label' : uuid.uuid4().hex,
'status' : kb.DataSampleStatus.USABLE,
'action' : action,
'snpMarkersSet' : mset
}
data_sample = kb.factory.create(kb.GenotypeDataSample, config).save()
probs, confs = make_dummy_data(mset)
do = kb.add_gdo_data_object(action, data_sample, probs, confs)
data_sample_list.append(data_sample)
""" ..
Data samples keep track of the existence of genotyping data defined on
a given marker set, while data objects model actual data containers
such as files or OMERO table rows. Multiple data objects can refer to
the same data sample when they contain the same data, but encoded in
different formats, or stored in distinct copies of the same file.
For simplicity, we have defined an action that directly links each
data sample to an individual. While this approach can be used when no
information is available on the steps that led to the production of
the data sample, the KB allows to keep track of several intermediate
objects such as blood samples, dna samples, titer plates, plate wells,
etc. To iterate over the data objects we have just stored, we can do
the following:
"""
np.set_printoptions(precision=3)
print "marker set id: %s" % mset.id
for gdo in kb.get_gdo_iterator(mset, data_samples=data_sample_list):
print gdo['probs']
print gdo['confidence']
print
|
Relooker Sa Cuisine Avant Apres - This is the latest information about Relooker Sa Cuisine Avant Apres; it can serve as a reference when you are deciding on the right design for your home.
Relooker Sa Cuisine Avant Apres images on the internet. We identified them from a reliable source and present them in the most relevant category. We believe this kind of Relooker Sa Cuisine Avant Apres graphic could be among the most trending topics when shared on Google Plus or Facebook.
Interior, Tapisserie Carreaux De Ciment was posted June on this site by Sileka.net. More over Tapisserie Carreaux De Ciment has viewed by 3525 visitor.
Interior, Porte Interieur Righini was posted June on this site by Sileka.net. More over Porte Interieur Righini has viewed by 14198 visitor.
Interior, Piscine Tubulaire Hors Sol Intex was posted June on this site by Sileka.net. More over Piscine Tubulaire Hors Sol Intex has viewed by 32925 visitor.
Interior, Piscine La Tour Du Pin was posted June on this site by Sileka.net. More over Piscine La Tour Du Pin has viewed by 16347 visitor.
Interior, Porte Interieur Isolation Phonique was posted June on this site by Sileka.net. More over Porte Interieur Isolation Phonique has viewed by 28958 visitor.
Interior, Piscine Hors Sol 7 X 4 was posted June on this site by Sileka.net. More over Piscine Hors Sol 7 X 4 has viewed by 62687 visitor.
Interior, Piscine Ossature Bois was posted June on this site by Sileka.net. More over Piscine Ossature Bois has viewed by 32147 visitor.
Interior, Peinture Pour Piscine Coque Polyester was posted June on this site by Sileka.net. More over Peinture Pour Piscine Coque Polyester has viewed by 75688 visitor.
Interior, Piscine En Bloc A Bancher was posted June on this site by Sileka.net. More over Piscine En Bloc A Bancher has viewed by 98559 visitor.
Interior, Verre Pour Porte De Cuisine was posted June on this site by Sileka.net. More over Verre Pour Porte De Cuisine has viewed by 53439 visitor.
Interior, Plan De Travail Cuisine A Carreler was posted June on this site by Sileka.net. More over Plan De Travail Cuisine A Carreler has viewed by 45354 visitor.
Interior, Peinture Chambre Bebe Garcon was posted June on this site by Sileka.net. More over Peinture Chambre Bebe Garcon has viewed by 91248 visitor.
Interior, Piscine Hors Sol Ovale Pas Chere was posted June on this site by Sileka.net. More over Piscine Hors Sol Ovale Pas Chere has viewed by 83564 visitor.
Interior, Piscine Hors Sol 132 Cm was posted June on this site by Sileka.net. More over Piscine Hors Sol 132 Cm has viewed by 66199 visitor.
Interior, Porte De Placard Cuisine Ikea was posted June on this site by Sileka.net. More over Porte De Placard Cuisine Ikea has viewed by 87193 visitor.
Interior, Recherche Credence Pour Cuisine was posted June on this site by Sileka.net. More over Recherche Credence Pour Cuisine has viewed by 18613 visitor.
Interior, Piscine A Balle Playmobil was posted June on this site by Sileka.net. More over Piscine A Balle Playmobil has viewed by 22387 visitor.
Interior, Peinture Sika Piscine was posted June on this site by Sileka.net. More over Peinture Sika Piscine has viewed by 15021 visitor.
Interior, Piscine Le Mans was posted June on this site by Sileka.net. More over Piscine Le Mans has viewed by 42672 visitor.
Interior, Robinet Cuisine Mitigeur was posted June on this site by Sileka.net. More over Robinet Cuisine Mitigeur has viewed by 42227 visitor.
|
__author__ = 'saguinag'+'@'+'nd.edu'
__version__ = "0.1.0"
##
## json_dataset_tograph = convert twitter (json format) dataset to a graph object
## arguments: input file (json)
##
## VersionLog:
# 0.0.1 Initial commit
#
import argparse,traceback,optparse
import urllib, json
import sys
def json_load_byteified(file_handle):
return _byteify(
json.load(file_handle, object_hook=_byteify),
ignore_dicts=True
)
def json_loads_byteified(json_text):
return _byteify(
json.loads(json_text, object_hook=_byteify),
ignore_dicts=True
)
def _byteify(data, ignore_dicts = False):
# if this is a unicode string, return its string representation
if isinstance(data, unicode):
return data.encode('utf-8')
# if this is a list of values, return list of byteified values
if isinstance(data, list):
return [ _byteify(item, ignore_dicts=True) for item in data ]
# if this is a dictionary, return dictionary of byteified keys and values
# but only if we haven't already byteified it
if isinstance(data, dict) and not ignore_dicts:
return {
_byteify(key, ignore_dicts=True): _byteify(value, ignore_dicts=True)
for key, value in data.iteritems()
}
# if it's anything else, return it in its original form
return data
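For illustration, here is the same decode-to-bytes idea in a self-contained, version-tolerant sketch (the function name `loads_byteified` and the sample record are illustrative; on Python 3, where `str` is already text, the helper simply returns the parsed values unchanged):

```python
import json

def loads_byteified(json_text):
    # Python 2: json.loads returns unicode strings, so encode each one to UTF-8 bytes.
    # Python 3: str is already text, so values pass through unchanged.
    try:
        text_type = unicode        # defined on Python 2 only
    except NameError:
        text_type = str
    def _byteify(data, ignore_dicts=False):
        if isinstance(data, text_type):
            return data.encode('utf-8') if text_type is not str else data
        if isinstance(data, list):
            return [_byteify(item, ignore_dicts=True) for item in data]
        if isinstance(data, dict) and not ignore_dicts:
            return {_byteify(k, ignore_dicts=True): _byteify(v, ignore_dicts=True)
                    for k, v in data.items()}
        return data
    # object_hook byteifies each dict as it is parsed; the outer call handles
    # top-level strings and lists, and skips dicts already processed by the hook.
    return _byteify(json.loads(json_text, object_hook=_byteify), ignore_dicts=True)

record = loads_byteified('{"user": "saguinag", "tags": ["graph", "twitter"]}')
```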
def get_parser():
parser = argparse.ArgumentParser(description='query twitter and output to file')
# parser.add_argument('jsonfile', metavar='JSONFILE', help='Quoted query')
parser.add_argument('--version', action='version', version=__version__)
return parser
def main():
parser = get_parser()
args = vars(parser.parse_args())
#print args
#url = "http://apollo.cse.nd.edu/datasets/paris_shooting.txt"
# url = "http://dsg1.crc.nd.edu/~saguinag/paper_accepted.json"
# response = urllib.urlopen(url)
# data = json.loads(response.read())
# data = response.read()
# for line in data:
infile = "paper_accepted.json"
infile = "datasets/network_models_evolution.json"
data = []
with open(infile) as data_file:
lines = data_file.readlines()
print len(lines)
for l in lines:
    d = json_loads_byteified(l)  # each line holds one JSON record; parse it, don't re-dump it
    print type(d)
    break
# with open(infile) as f:
# ldict = json.load(f)
# print type(ldict)
# # break
if __name__ == '__main__':
main()
sys.exit(0)
|
Miki Pulley offers a wide variety of clutches and brakes to the motion control industry worldwide. With decades of experience, our designs have been tested and refined to deliver unparalleled quality and reliability.
As the name suggests, electromagnetic clutches and brakes work by using the magnetic force generated by an energized coil. These products fall into two basic types: ‘energized’ and ‘de-energized’. In the energized type, the clutch or brake engages when power is supplied to the coil. In the de-energized type, the energized coil pulls the movable plate away from the rotor, compressing a series of springs; when power is removed, the springs are free to push the plate back against the rotor to stop rotation.
Miki Pulley provides quality industrial clutches and brakes to many well-known OEMs throughout the world. Our clutches and brakes have proven their value in the industry, meeting and surpassing customer requirements in terms of quality and service life.
With a wide range of sizes available, you may also choose between shaft or flange mounted types.
The 111 design borrows from a flat-screen brake and clutch layout. This brake is ideal for sudden stops. Three different armature types are available, in both shaft and flange mounting styles.
The BSZ is similar to the 111-12 model brake, but offers a slimmer profile and is available in two different armature types.
|
#!/usr/bin/python
# Title: readFile.py
# Description: Contains the grammar and methods for reading a file
# Author: Sawan J. Kapai Harpalani
# Date: 2016-06-26
# Version: 0.1
# Usage: python readFile.py
# Notes:
# python_version: 2.6.6
# License: Copyright 200X Sawan J. Kapai Harpalani
# This file is part of MMOTracker. MMOTracker is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by the Free Software Foundation,
# either version 3 of the License, or (at your option) any later version.
# MMOTracker is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without
# even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General
# Public License for more details. You should have received a copy of the GNU General Public License along with MMOTracker.
# If not, see http://www.gnu.org/licenses/.
#==============================================================================
from pyparsing import *
# Rules
reservedWordReadFile = Suppress(Keyword("readFile"))
fileName = QuotedString('"', escChar = "\\")
leftBracket = Suppress(Literal("("))
rightBracket = Suppress(Literal(")"))
readFileExpr = reservedWordReadFile + leftBracket + fileName.setResultsName("fileName") + rightBracket
# Reads the file and returns its contents as a quoted string so the result can be assigned to a variable
def readFile(fileN):
    with open(fileN, 'r') as fp:
        return "\"" + fp.read() + "\""
readFileExpr.setParseAction(lambda tokens: readFile(tokens.fileName))
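Assuming pyparsing is installed, the grammar above can be exercised like this (the temporary file stands in for whatever script file the DSL would reference):

```python
import os
import tempfile
from pyparsing import Keyword, Literal, QuotedString, Suppress

# Re-declare the grammar from above so the sketch is self-contained
reservedWordReadFile = Suppress(Keyword("readFile"))
fileName = QuotedString('"', escChar="\\")
leftBracket = Suppress(Literal("("))
rightBracket = Suppress(Literal(")"))
readFileExpr = (reservedWordReadFile + leftBracket
                + fileName.setResultsName("fileName") + rightBracket)

def readFile(fileN):
    with open(fileN, 'r') as fp:
        return "\"" + fp.read() + "\""

# The parse action replaces the matched tokens with the quoted file contents
readFileExpr.setParseAction(lambda tokens: readFile(tokens.fileName))

# Write a small file, then parse a readFile(...) expression that names it
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as tmp:
    tmp.write("hello world")
    path = tmp.name

result = readFileExpr.parseString('readFile("%s")' % path)[0]
os.unlink(path)
```

Because the parse action returns the quoted contents, `result` is the file body wrapped in double quotes, ready to be treated as a string literal by the rest of the DSL.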
|
Want More New Business? Turn It Down.
Chasing New Business is an interesting game. I know because I have played it a lot. From both the agency side and the agency review side.
I learn a lot from walking down both sides of the street.
Some agencies chase every shadow. “We’d be perfect for (name the client)“, they always say.
Some agencies say they don’t respond to RFPs and only work with clients who seek them out or come through referrals. Nice place to be.
|
import json
import collections
import matplotlib
matplotlib.use('Agg')  # non-interactive backend so figures can be rendered without a display
import matplotlib.pyplot as plt
import numpy as np
from textwrap import wrap
myinput = """[
{
"url":"https://api.etais.ee/api/invoices/9e67980771a94de3bd0075fe84522b05/",
"uuid":"9e67980771a94de3bd0075fe84522b05",
"number":100151,
"customer":"https://api.etais.ee/api/customers/5991d0c109df4e8cab4f9dd660295517/",
"price":"87.7300000",
"tax":"0.0000000",
"total":"87.7300000",
"state":"pending",
"year":2018,
"month":1,
"issuer_details":{
"phone":{
"national_number":"5555555",
"country_code":"372"
},
"account":"123456789",
"country_code":"EE",
"address":"Lille 4-205",
"country":"Estonia",
"company":"OpenNode",
"postal":"80041",
"vat_code":"EE123456789",
"email":"info@opennodecloud.com",
"bank":"Estonian Bank"
},
"invoice_date":null,
"due_date":null,
"customer_details":null,
"openstack_items":[
{
"name":"WaldurChatbot (Small / Generic)",
"price":87.73,
"tax":"0.0000000",
"total":"87.7300000",
"unit_price":"2.8300000",
"unit":"day",
"start":"2017-12-01T00:00:00Z",
"end":"2017-12-31T23:59:59.999999Z",
"product_code":"",
"article_code":"",
"project_name":"Waldur Chatbot testbed",
"project_uuid":"88879e68a4c84f6ea0e05fb9bc59ea8f",
"scope_type":"OpenStack.Tenant",
"scope_uuid":"ed505f9ebd8c491b94c6f8dfc30b54b0",
"package":"https://api.etais.ee/api/openstack-packages/517047bdfefe418899c981663f1ea5f5/",
"tenant_name":"WaldurChatbot",
"tenant_uuid":"ed505f9ebd8c491b94c6f8dfc30b54b0",
"usage_days":31,
"template_name":"Generic",
"template_uuid":"a85daef727d344b3858541e4bc29a274",
"template_category":"Small"
}
],
"offering_items":[
],
"generic_items":[
]
},
{
"url":"https://api.etais.ee/api/invoices/9e67980771a94de3bd0075fe84522b05/",
"uuid":"9e67980771a94de3bd0075fe84522b05",
"number":100151,
"customer":"https://api.etais.ee/api/customers/5991d0c109df4e8cab4f9dd660295517/",
"price":"87.7300000",
"tax":"0.0000000",
"total":"87.7300000",
"state":"pending",
"year":2017,
"month":12,
"issuer_details":{
"phone":{
"national_number":"5555555",
"country_code":"372"
},
"account":"123456789",
"country_code":"EE",
"address":"Lille 4-205",
"country":"Estonia",
"company":"OpenNode",
"postal":"80041",
"vat_code":"EE123456789",
"email":"info@opennodecloud.com",
"bank":"Estonian Bank"
},
"invoice_date":null,
"due_date":null,
"customer_details":null,
"openstack_items":[
{
"name":"WaldurChatbot (Small / Generic)",
"price":87.73,
"tax":"0.0000000",
"total":"87.7300000",
"unit_price":"2.8300000",
"unit":"day",
"start":"2017-12-01T00:00:00Z",
"end":"2017-12-31T23:59:59.999999Z",
"product_code":"",
"article_code":"",
"project_name":"Waldur Chatbot testbed",
"project_uuid":"88879e68a4c84f6ea0e05fb9bc59ea8f",
"scope_type":"OpenStack.Tenant",
"scope_uuid":"ed505f9ebd8c491b94c6f8dfc30b54b0",
"package":"https://api.etais.ee/api/openstack-packages/517047bdfefe418899c981663f1ea5f5/",
"tenant_name":"WaldurChatbot",
"tenant_uuid":"ed505f9ebd8c491b94c6f8dfc30b54b0",
"usage_days":31,
"template_name":"Generic",
"template_uuid":"a85daef727d344b3858541e4bc29a274",
"template_category":"Small"
}
],
"offering_items":[
],
"generic_items":[
]
},
{
"url":"https://api.etais.ee/api/invoices/59fd12a0d3e34f829d6a0eefd2e5ee41/",
"uuid":"59fd12a0d3e34f829d6a0eefd2e5ee41",
"number":100156,
"customer":"https://api.etais.ee/api/customers/0d689685ab3444bbb592338e24613f03/",
"price":"87.7300000",
"tax":"0.0000000",
"total":"87.7300000",
"state":"pending",
"year":2017,
"month":12,
"issuer_details":{
"phone":{
"national_number":"5555555",
"country_code":"372"
},
"account":"123456789",
"country_code":"EE",
"address":"Lille 4-205",
"country":"Estonia",
"company":"OpenNode",
"postal":"80041",
"vat_code":"EE123456789",
"email":"info@opennodecloud.com",
"bank":"Estonian Bank"
},
"invoice_date":null,
"due_date":null,
"customer_details":null,
"openstack_items":[
{
"name":"Waldur Maie cloud (Small / Generic)",
"price":87.73,
"tax":"0.0000000",
"total":"87.7300000",
"unit_price":"2.8300000",
"unit":"day",
"start":"2017-12-01T00:00:00Z",
"end":"2017-12-31T23:59:59.999999Z",
"product_code":"",
"article_code":"",
"project_name":"W-M project",
"project_uuid":"26fc83e64ea0473fb9f57f0ae978b396",
"scope_type":"OpenStack.Tenant",
"scope_uuid":"1571bca1f6594ad3bede4d2c8d64755a",
"package":"https://api.etais.ee/api/openstack-packages/81e93543103b4cf8a5d3658e026e98f3/",
"tenant_name":"Waldur Maie cloud",
"tenant_uuid":"1571bca1f6594ad3bede4d2c8d64755a",
"usage_days":31,
"template_name":"Generic",
"template_uuid":"a85daef727d344b3858541e4bc29a274",
"template_category":"Small"
}
],
"offering_items":[
],
"generic_items":[
]
},
{
"url":"https://api.etais.ee/api/invoices/bb6f38e908e7493791c65b26e88e1619/",
"uuid":"bb6f38e908e7493791c65b26e88e1619",
"number":100121,
"customer":"https://api.etais.ee/api/customers/5991d0c109df4e8cab4f9dd660295517/",
"price":"84.9000000",
"tax":"0.0000000",
"total":"84.9000000",
"state":"created",
"year":2017,
"month":11,
"issuer_details":{
"phone":{
"national_number":"5555555",
"country_code":"372"
},
"account":"123456789",
"country_code":"EE",
"address":"Lille 4-205",
"country":"Estonia",
"company":"OpenNode",
"postal":"80041",
"vat_code":"EE123456789",
"email":"info@opennodecloud.com",
"bank":"Estonian Bank"
},
"invoice_date":"2017-12-01",
"due_date":"2017-12-31",
"customer_details":null,
"openstack_items":[
{
"name":"WaldurChatbot (Small / Generic)",
"price":84.9,
"tax":"0.0000000",
"total":"84.9000000",
"unit_price":"2.8300000",
"unit":"day",
"start":"2017-11-01T00:00:00Z",
"end":"2017-11-30T23:59:59.999999Z",
"product_code":"",
"article_code":"",
"project_name":"Waldur Chatbot testbed",
"project_uuid":"88879e68a4c84f6ea0e05fb9bc59ea8f",
"scope_type":"OpenStack.Tenant",
"scope_uuid":"ed505f9ebd8c491b94c6f8dfc30b54b0",
"package":"https://api.etais.ee/api/openstack-packages/517047bdfefe418899c981663f1ea5f5/",
"tenant_name":"WaldurChatbot",
"tenant_uuid":"ed505f9ebd8c491b94c6f8dfc30b54b0",
"usage_days":30,
"template_name":"Generic",
"template_uuid":"a85daef727d344b3858541e4bc29a274",
"template_category":"Small"
}
],
"offering_items":[
],
"generic_items":[
]
},
{
"url":"https://api.etais.ee/api/invoices/d13cdd4ef4d2478e8e0cf0961d20e6f2/",
"uuid":"d13cdd4ef4d2478e8e0cf0961d20e6f2",
"number":100129,
"customer":"https://api.etais.ee/api/customers/0d689685ab3444bbb592338e24613f03/",
"price":"53.7700000",
"tax":"0.0000000",
"total":"53.7700000",
"state":"created",
"year":2017,
"month":11,
"issuer_details":{
"phone":{
"national_number":"5555555",
"country_code":"372"
},
"account":"123456789",
"country_code":"EE",
"address":"Lille 4-205",
"country":"Estonia",
"company":"OpenNode",
"postal":"80041",
"vat_code":"EE123456789",
"email":"info@opennodecloud.com",
"bank":"Estonian Bank"
},
"invoice_date":"2017-12-01",
"due_date":"2017-12-31",
"customer_details":null,
"openstack_items":[
{
"name":"Waldur Maie cloud (Small / Generic)",
"price":53.77,
"tax":"0.0000000",
"total":"53.7700000",
"unit_price":"2.8300000",
"unit":"day",
"start":"2017-11-12T11:29:21.522230Z",
"end":"2017-11-30T23:59:59.999999Z",
"product_code":"",
"article_code":"",
"project_name":"W-M project",
"project_uuid":"26fc83e64ea0473fb9f57f0ae978b396",
"scope_type":"OpenStack.Tenant",
"scope_uuid":"1571bca1f6594ad3bede4d2c8d64755a",
"package":"https://api.etais.ee/api/openstack-packages/81e93543103b4cf8a5d3658e026e98f3/",
"tenant_name":"Waldur Maie cloud",
"tenant_uuid":"1571bca1f6594ad3bede4d2c8d64755a",
"usage_days":19,
"template_name":"Generic",
"template_uuid":"a85daef727d344b3858541e4bc29a274",
"template_category":"Small"
}
],
"offering_items":[
],
"generic_items":[
]
},
{
"url":"https://api.etais.ee/api/invoices/b094173f50a848e19d3362c84eabebc4/",
"uuid":"b094173f50a848e19d3362c84eabebc4",
"number":100096,
"customer":"https://api.etais.ee/api/customers/5991d0c109df4e8cab4f9dd660295517/",
"price":"87.7300000",
"tax":"0.0000000",
"total":"87.7300000",
"state":"created",
"year":2017,
"month":10,
"issuer_details":{
"phone":{
"national_number":"5555555",
"country_code":"372"
},
"account":"123456789",
"country_code":"EE",
"address":"Lille 4-205",
"country":"Estonia",
"company":"OpenNode",
"postal":"80041",
"vat_code":"EE123456789",
"email":"info@opennodecloud.com",
"bank":"Estonian Bank"
},
"invoice_date":"2017-11-01",
"due_date":"2017-12-01",
"customer_details":null,
"openstack_items":[
{
"name":"WaldurChatbot (Small / Generic)",
"price":87.73,
"tax":"0.0000000",
"total":"87.7300000",
"unit_price":"2.8300000",
"unit":"day",
"start":"2017-10-01T00:00:00Z",
"end":"2017-10-31T23:59:59.999999Z",
"product_code":"",
"article_code":"",
"project_name":"Waldur Chatbot testbed",
"project_uuid":"88879e68a4c84f6ea0e05fb9bc59ea8f",
"scope_type":"OpenStack.Tenant",
"scope_uuid":"ed505f9ebd8c491b94c6f8dfc30b54b0",
"package":"https://api.etais.ee/api/openstack-packages/517047bdfefe418899c981663f1ea5f5/",
"tenant_name":"WaldurChatbot",
"tenant_uuid":"ed505f9ebd8c491b94c6f8dfc30b54b0",
"usage_days":31,
"template_name":"Generic",
"template_uuid":"a85daef727d344b3858541e4bc29a274",
"template_category":"Small"
}
],
"offering_items":[
],
"generic_items":[
]
},
{
"url":"https://api.etais.ee/api/invoices/b636ee1236e0486994cdd1ffda4c7e1d/",
"uuid":"b636ee1236e0486994cdd1ffda4c7e1d",
"number":100076,
"customer":"https://api.etais.ee/api/customers/5991d0c109df4e8cab4f9dd660295517/",
"price":"11.3200000",
"tax":"0.0000000",
"total":"11.3200000",
"state":"created",
"year":2017,
"month":9,
"issuer_details":{
"phone":{
"national_number":"5555555",
"country_code":"372"
},
"account":"123456789",
"country_code":"EE",
"address":"Lille 4-205",
"country":"Estonia",
"company":"OpenNode",
"postal":"80041",
"vat_code":"EE123456789",
"email":"info@opennodecloud.com",
"bank":"Estonian Bank"
},
"invoice_date":"2017-10-01",
"due_date":"2017-10-31",
"customer_details":null,
"openstack_items":[
{
"name":"WaldurChatbot (Small / Generic)",
"price":11.32,
"tax":"0.0000000",
"total":"11.3200000",
"unit_price":"2.8300000",
"unit":"day",
"start":"2017-09-27T13:53:31.425080Z",
"end":"2017-09-30T23:59:59.999999Z",
"product_code":"",
"article_code":"",
"project_name":"Waldur Chatbot testbed",
"project_uuid":"88879e68a4c84f6ea0e05fb9bc59ea8f",
"scope_type":"OpenStack.Tenant",
"scope_uuid":"ed505f9ebd8c491b94c6f8dfc30b54b0",
"package":"https://api.etais.ee/api/openstack-packages/517047bdfefe418899c981663f1ea5f5/",
"tenant_name":"WaldurChatbot",
"tenant_uuid":"ed505f9ebd8c491b94c6f8dfc30b54b0",
"usage_days":4,
"template_name":"Generic",
"template_uuid":"a85daef727d344b3858541e4bc29a274",
"template_category":"Small"
}
],
"offering_items":[
],
"generic_items":[
]
}
]"""
data = json.loads(myinput)
num_to_monthdict = {
1:'Jan',
2:'Feb',
3:'Mar',
4:'Apr',
5:'May',
6:'Jun',
7:'Jul',
8:'Aug',
9:'Sep',
10:'Oct',
11:'Nov',
12:'Dec'
}
plotx = []
ploty = []
uuid = '5991d0c109df4e8cab4f9dd660295517'
customer = 'https://api.etais.ee/api/customers/' + uuid + '/'
newlist = []
print(type(data))
print(type(data[0]))
for i in range((len(data)-1), -1, -1):
if data[i]['customer'] == customer:
newlist.append(data[i])
plotx.append(num_to_monthdict[data[i]['month']] + " " + str(data[i]['year']))
ploty.append(float(data[i]['total']))
print("### " + str(len(newlist)))
'''
result = collections.OrderedDict()
for i in range(len(plotx)):
result[plotx[i]] = float(ploty[i])
'''
print(plotx)
print(ploty)
N = len(ploty)
ind = np.arange(N)
width = 0.35
fig, ax = plt.subplots()
rects1 = ax.bar(ind, ploty, width, color='#75ad58')
ax.set_xlabel('Months')
ax.set_ylabel('Total costs')
ax.set_xticks(ind + width / 2)
ax.set_xticklabels(plotx)
title = ax.set_title("\n".join(wrap('Last ' + str(N) + ' month total costs but then every time the title gets longer '
                                    'omg like wtf when does it stop OMG HELP well okay '
                                    'let me tell you a story all about how '
                                    'my life got turned upside down '
                                    'so id like to take a moment just sit right there', 60)))
def autolabel(rects, ax):
# Get y-axis height to calculate label position from.
(y_bottom, y_top) = ax.get_ylim()
y_height = y_top - y_bottom
for rect in rects:
height = rect.get_height()
label_position = height + (y_height * 0.01)
ax.text(rect.get_x() + rect.get_width()/2., label_position,
'%d' % int(height),
ha='center', va='bottom')
autolabel(rects1, ax)
print()
# Recolour the final bar to mark it as the current-month estimate.
rects1[-1].set_color('#2388d6')
real_invoice = matplotlib.patches.Patch(color='#75ad58', label='Invoice')
estimate_invoice = matplotlib.patches.Patch(color='#2388d6', label='Estimation')
plt.legend(handles=[real_invoice, estimate_invoice])
fig.tight_layout()
title.set_y(1.05)
fig.subplots_adjust(top=0.8)
#plt.show()
fig.savefig('foo.png')
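The customer-filtering loop above can be exercised on its own; a minimal standalone sketch (hypothetical invoice dicts carrying only the fields the loop actually reads):

```python
num_to_month = {10: 'Oct', 11: 'Nov', 12: 'Dec'}

invoices = [  # newest first, as the API returns them
    {'customer': 'https://api.etais.ee/api/customers/abc/',
     'month': 12, 'year': 2017, 'total': '79.2400000'},
    {'customer': 'https://api.etais.ee/api/customers/xyz/',
     'month': 11, 'year': 2017, 'total': '84.9000000'},
    {'customer': 'https://api.etais.ee/api/customers/abc/',
     'month': 10, 'year': 2017, 'total': '87.7300000'},
]

def series_for(customer_url, invoices):
    """Walk the invoices oldest-first and build (labels, totals) for one customer."""
    labels, totals = [], []
    for inv in reversed(invoices):
        if inv['customer'] == customer_url:
            labels.append('%s %d' % (num_to_month[inv['month']], inv['year']))
            totals.append(float(inv['total']))
    return labels, totals

labels, totals = series_for('https://api.etais.ee/api/customers/abc/', invoices)
```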
|
Wait... did they move the camera to a different location, or is the beach gone because of the waves...?
13jan18-- A Thank You to our host, GoKite Cabarete - Kiteboarding School, for sponsoring this 'live cam' view of Kite Beach from GoKite Kiteboarding School, marvelous.
Really nice to see the waves and the gorgeous blue sea, wish there was sound too.
How is it that no tangles occur? This looks like a lot of fun. Must be windy every day there.
|
#!/usr/bin/python
# -*- coding: UTF-8 -*-
__all__ = ['register', 'enable', 'disable', 'get', 'put', 'exists', 'cached', 'Cache']
from functools import wraps
import lltk.config as config
from lltk.helpers import debug, warning
from lltk.exceptions import CacheFatalError
caches = {}
def register(cache):
''' Registers a cache. '''
global caches
name = cache().name
if not caches.has_key(name):
caches[name] = cache
def enable(identifier = None, *args, **kwargs):
    ''' Enables a specific cache for the current session. Remember that it has to be registered. '''
global cache
if not identifier:
for item in (config['default-caches'] + ['NoCache']):
if caches.has_key(item):
debug('Enabling default cache %s...' % (item,))
cache = caches[item](*args, **kwargs)
if not cache.status():
warning('%s could not be loaded. Is the backend running (%s:%d)?' % (item, cache.server, cache.port))
continue
# This means that the cache backend was set up successfully
break
else:
debug('Cache backend %s is not registered. Are all requirements satisfied?' % (item,))
elif caches.has_key(identifier):
debug('Enabling cache %s...' % (identifier,))
previouscache = cache
cache = caches[identifier](*args, **kwargs)
if not cache.status():
warning('%s could not be loaded. Is the backend running (%s:%d)?' % (identifier, cache.server, cache.port))
cache = previouscache
else:
debug('Cache backend %s is not registered. Are all requirements satisfied?' % (identifier,))
def disable():
''' Disables the cache for the current session. '''
global cache
cache = NoCache()
def connect():
    ''' Establishes the connection to the backend. '''
    return cache.connect()
def status():
    ''' Returns True if connection can be established, False otherwise. '''
    return cache.status()
def exists(key):
''' Checks if a document is cached. '''
return cache.exists(key)
def get(key):
''' Retrieves a document from the the currently activated cache (by unique identifier). '''
return cache.get(key)
def put(key, value, extradata = {}):
''' Caches a document using the currently activated cache. '''
return cache.put(key, value, extradata)
def delete(key):
''' Remove a document from the cache (by unique identifier). '''
return cache.delete(key)
def commit():
''' Ensures that all changes are committed to disc. '''
return cache.commit()
def cached(key = None, extradata = {}):
''' Decorator used for caching. '''
def decorator(f):
@wraps(f)
def wrapper(*args, **kwargs):
uid = key
if not uid:
from hashlib import md5
arguments = list(args) + [(a, kwargs[a]) for a in sorted(kwargs.keys())]
uid = md5(str(arguments)).hexdigest()
if exists(uid):
debug('Item \'%s\' is cached (%s).' % (uid, cache))
return get(uid)
else:
debug('Item \'%s\' is not cached (%s).' % (uid, cache))
result = f(*args, **kwargs)
debug('Caching result \'%s\' as \'%s\' (%s)...' % (result, uid, cache))
debug('Extra data: ' + (str(extradata) or 'None'))
put(uid, result, extradata)
return result
return wrapper
return decorator
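For illustration, the same memoization pattern as the `cached` decorator above, backed by a plain dict instead of a registered backend (standalone, Python 3 syntax; all names here are illustrative):

```python
from functools import wraps
from hashlib import md5

_store = {}  # stand-in for a registered cache backend

def cached_demo(f):
    """Same key scheme as cached() above: md5 over args + sorted kwargs."""
    @wraps(f)
    def wrapper(*args, **kwargs):
        arguments = list(args) + [(a, kwargs[a]) for a in sorted(kwargs)]
        uid = md5(str(arguments).encode()).hexdigest()
        if uid in _store:
            return _store[uid]          # cache hit
        result = f(*args, **kwargs)
        _store[uid] = result            # cache miss: compute and store
        return result
    return wrapper

calls = []

@cached_demo
def square(x):
    calls.append(x)  # record real invocations to observe caching
    return x * x

first = square(3)
second = square(3)  # answered from _store; square() body is not re-run
```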
class GenericCache(object):
''' Generic cache class that all custom caches should be derived from. '''
def __init__(self, *args, **kwargs):
        self.name = 'Unknown'
self.connection = False
self.server = None
self.port = None
self.user = None
self.database = None
self.filename = None
if kwargs.has_key('server'):
self.server = kwargs['server']
if kwargs.has_key('port'):
self.port = kwargs['port']
if kwargs.has_key('user'):
self.user = kwargs['user']
if kwargs.has_key('database'):
self.database = kwargs['database']
if kwargs.has_key('filename'):
self.filename = kwargs['filename']
def __del__(self):
# self.commit()
pass
def __str__(self):
return '%s cache backend' % (self.name)
@classmethod
def needsconnection(self, f):
''' Decorator used to make sure that the connection has been established. '''
@wraps(f)
def wrapper(self, *args, **kwargs):
if not self.connection:
self.connect()
return f(self, *args, **kwargs)
return wrapper
def setup(self):
''' Runs the initial setup for the cache. '''
pass
def connect(self):
''' Establishes the connection to the backend. '''
pass
def status(self):
''' Returns True if connection can be established, False otherwise. '''
try:
self.connect()
except CacheFatalError:
return False
return True
def exists(self, key):
''' Checks if a document is cached. '''
pass
def get(self, key):
''' Retrieves a document from the cache (by unique identifier). '''
pass
def put(self, key, value, extradata = {}):
''' Caches a document. '''
pass
def delete(self, key):
''' Remove a document from the cache (by unique identifier). '''
pass
def commit(self):
''' Ensures that all changes are committed to disc. '''
pass
class NoCache(GenericCache):
''' Pseudo-class implementing no caching at all. '''
def __init__(self, *args, **kwargs):
super(NoCache, self).__init__()
self.name = 'NoCache'
def exists(self, key):
''' Checks if a document is cached. '''
return False
register(NoCache)
# Setup the NoCache() cache for now...
cache = NoCache()
# Import and register all available caches...
import lltk.caches
# Enable default caches...
enable()
del lltk
|
Eyes floating in unfocused half-sleep drift ashore on an out of place shape; a woman stands at the foot of my bed.
Cool air pulses from the open window, swaying the curtain and her briar-patch of hair on the same slow pendulum frequency. She’s a deeper patch in the dark, and I feel rather than see her movement as she steps onto my ankle, then groin, then chest.
My arms won’t rise from the itchy mattress, my legs won’t kick. The pain of her weight forces a wheeze from my lips that sounds like a sleep sigh. No air returns to my lungs; her feet press them flat.
She crouches. Her breath on my face stinks of opium smoke and rotting meat. I pour all my will into moving my hands, but they lay like dead snails.
A fist’s width to my left Jenny’s undisturbed sleep, her easy breathing, mock me.
The pulse of cold air from the window brushes fetid hair against my face. Something with many legs falls on my forehead, wriggles into my eye, rights itself, and then scurries down my cheek. Her toes on my chest writhe like worms in salt.
Looking into her dark outline, I see a tunnel, like she’s an open door.
Deep beyond that shadowed way burns a single star, blazing transparent light I can’t see, but feel as an ache in my bones and a shiver on my skin.
My chest sears like meat on a stove, and my mouth fills with salt.
Splintered boards make a box about me. Light comes in at the cracks, but it’s so dark I can’t see my own hands where they’re clutched against my chest. I wear some kind of loose, unfamiliar dress far too thin to keep out the pricking splinters. The dress was made to keep out summer heat, not sharp things.
There’s not enough room to unbend my knees. The ache of wanting to grows and grows.
I pick at splinters to try to make a hole, but they make my fingers bleed. I pick at the split between the boards, hoping for more light, but all I see between them are moving green and shadows.
I don’t cry. This surprises me. I’m proud of it.
I hold to something: a book. Mine. They didn’t take it. They won’t. I won’t let them.
The box becomes a room. Bamboo bars make one wall and yellow blocks of stone without mortar form the other three. The floor is stone too, strewn with narrow leaves long dead and sharp as paper. I cover myself in the leaves, waiting for my fate unknown, pretending the dead things can hide me from it.
The dress fades from cherry to the same pale pink as my hands. I had a bow in my hair. I use it to bind my bleeding fingertips when the nails come off from scratching at the base of the pole that holds the door shut.
Beyond the cage waits another closed door, this one of solid wood. Beside it, a tiny window throws a beam of moonlight across the chaos of leaves and my rag-wrapped body. A tiny bird perches in that window, singing a song about summer, but it’s always cold here.
They didn’t take the book. Its pages brim over with the actions of heroes who will not save me. I bury it among the leaves like I’m planting a seed.
I wait for the lock to click. When hunger has weakened me, it does. I wait for the door to open. When I lay curled and defeated, it does. I count those who enter: they are two, a woman and a man, both full grown, both full muscled, both naked except for cloth around their waists and knives at their hips.
The woman speaks in my language, saying that her master has ordered them to release me, telling me I can be clean now, and eat, and that I will be honored by the master, welcomed, loved if I’m worthy.
I wait, counting my breaths.
The boy enters the cage. He crouches and smiles. He offers his hand.
I bite it and taste his blood.
His smile vanishes. He grabs my hair and smashes my head against the floor. The woman holds me down while the boy tears the red dress. He goes to the door, pressing my dress to his bleeding hand.
And the door is opened.
I take her knife off her belt. I cut her three times: rib, hand, and face. She backs away but I follow. Blood flows into her eyes. She kicks me to the floor. A pain in my chest is like what a lightbulb must feel when it breaks. She tries to catch my hair again, but I put the knife into her foot. She falls. Her yell startles birds from the roof.
Many faces fill the doorway. I hurtle myself at them with nothing but my sweat, underwear, and a sharp blade. The knife flashes in moonlight. They fall back.
We dance on a platform green with moss – a huge foundation to a building long gone. On it a dozen houses like the one I escaped make a line pointing to a pagoda three stories tall. All around the pagoda jungle trees and monkeys stand among ruined stones. White moonlight twinkles from a river’s wide meandering, and rushes through waves of almost glowing flowers along its bank and around the ruined foundation so it seems an island in a many-colored sea.
Men and women circle me. Their laughter sounds like war bells. They are many. I am one.
I saw a mad dog once, encircled by a pack of coyotes that chased it up and down the farm road outside my home. With foam at its teeth, it frenzied this way and that, defiant, wild, and untouchable.
But where I chase, they fall back, and the ones behind me grab. I spin, knife and teeth biting to new directions, but they retreat. I’m small. Smaller by half than the smallest of them. I’m weak. Too young for sturdy muscle. They have knives, but they don’t draw them.
A net catches me. I fall tangled. I’m dragged and jostled, scraped and bruised. I saw at the ropes but one of them catches my hand and another pries the knife from it. They drag me from the net, peel off my smallclothes, and leave me naked.
I kick and bite. I am not afraid. I should be, I know. But instead I feel anger like lightning branching in my body. I feel a cold storm wind howling inside me, frustrated by the closed window that is their physical advantage, but keening to be loosed.
They let me go and stand back.
A huge man approaches from the pagoda. His thick hair flows in a luxurious river over his robe of silk, and his round eyes carry haughty command, but he has a face like a boar and a body like a bear.
His call turns all eyes to him, so I rise. My own hair itches with my sweat and half blocks my sight. My hooked fingers want to claw out his teeth. I try to freeze him with my eyes.
He stops where my knife fell to place the white sole of his foot on its hilt.
I straighten hunched shoulders, and in my mind, I imagine my bare skin is shining armor, my sweat a queen’s jewels, and every strand of my hair is a soldier ready to aid me.
I killed the mad dog with my father’s rook rifle. The coyotes scattered. I was young then too, but not too young to be my own hero.
His grin is too wide. His teeth too white. Cold rushes from him. “My name is Agafya,” he says.
He slides the knife across moss covered stones to stop against my toes.
I watch myself as if through a stranger’s eyes as I bend to lift the blade: This child, this girl, with her blue eyes, pale skin, and wild hair, is not me. I am not her. I am only flesh and blood. She lunges forward like a newborn war god, sprung from loose soil and thunderclouds, hungry for blood and sacrifice.
Yet, in the flash of light from her blade and teeth, in the indomitable animus of her charge, comes a gleam of grace. Her flying feet lift her up. The sea of colored flowers sways on the wind from the river.
The pressure on my chest relents. I breathe. She’s gone. She never was. I never was. I am not.
Dreaming. I can only be dreaming.
A cheek rests against my chest. A woman’s cheek. She yawns, then she opens her eyes. Green. This is Jenny. This is the bed we rented in Bangkok. This is the morning we planned for.
“Good morning,” croaks the green eyed girl.
I can’t reply until after she’s moved off me. I lift my hands, marveling at their strength, their muscle, their almost unfamiliar scars.
“You don’t make a very good pillow.” Jenny pops her neck, then rolls over to check her purse. The last worming shadows of my dream fade away into the world as the warmth she left against my chest cools into the pre-dawn chill.
“I won’t mind if you don’t.” Though I banter, it seems my dream has darkened the morning shadows. The open window feels like a well, calling me down it. Pale light through the trees carries the chill of the moon.
The establishment owner left a bucket of fresh water outside our door. It’s not much to freshen up with but we’re lucky to have it considering the price point.
Cold water brings a shiver. Drops ripple my reflection in the bucket. My eyes look paler than the brown they’ve always been. The dream left my nerves frayed and jingling with weird feelings I don’t trust, but I try to keep the courage she carried. I’ve never felt so brave and I could do to feel that way again.
“Fine.” I’m too shaken to argue.
Jenny’s revolver clicks as she cleans it. A wren lands in the window behind her and watches. When the gun is cleaner than either of us, she nods in satisfaction. Each shell slides soundlessly back into its chamber. She has four; two chambers rest empty.
That dream was absolutely fascinating. Sylvia sounds *awesome*, even as a kid. I love how she decided to be her own hero.
I wonder how that’s happening.
Your Gaiman and Lovecraft is showing very strongly in the first part of the chapter. I was equal parts squirming with discomfort (like those wormy toes) and horribly transfixed with what was happening. Suffering from very bad night terrors, I'm all too familiar with how those waking dreams can seem so real (and am curious as to how real that vision actually was).
Was very intrigued at how the vision felt like a legend being told; I could see it very vibrantly in my mind's eye with the words you deliberately chose to paint the scene with.
Looking forward to seeing what Jenny and Mark get themselves into next. Their rapport remains strong and one of the things I appreciate quite a bit. Thanks for the update!
|
#!/usr/bin/env python2
from __future__ import print_function
from setuptools import setup
from sys import version_info
if version_info < (3, 5):
requirements = ["pathlib"]
else:
requirements = []
setup(name="mpd_pydb",
author="Wieland Hoffmann",
author_email="themineo@gmail.com",
packages=["mpd_pydb"],
package_dir={"mpd_pydb": "mpd_pydb"},
download_url="https://github.com/mineo/mpd_pydb/tarball/master",
url="http://github.com/mineo/mpd_pydb",
license="MIT",
classifiers=["Development Status :: 4 - Beta",
"License :: OSI Approved :: MIT License",
"Natural Language :: English",
"Operating System :: OS Independent",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.5",
"Programming Language :: Python :: 3.6",],
description="Module for reading an MPD database",
long_description=open("README.rst").read(),
setup_requires=["setuptools_scm", "pytest-runner"],
use_scm_version={"write_to": "mpd_pydb/version.py"},
install_requires=requirements,
extras_require={
'docs': ['sphinx']
},
tests_require=["pytest"],
)
|
Author: Sherretz, Kelly L.; Kelly, Christopher G.; Matos, Rachel A.
Abstract: The 2013 Teacher and Administrator Supply and Demand Survey is an online survey completed by school district personnel directors and charter school administrators. The data were collected through the Delaware Department of Education’s DEEDs system in January and February 2013 for the 2012-2013 school year. The survey was conducted by the University of Delaware’s Institute for Public Administration in conjunction with the Delaware Department of Education (DOE). This is the eleventh year of the study. The study focuses on teacher hiring; teacher hiring difficulties; recruitment strategies and incentives; the reasons for teachers leaving, vacancies, and shortage areas; hiring for non-teaching positions; and administrative hiring and vacancies. New topic areas include hiring of inexperienced teachers, hiring in high-needs schools, the opinions of respondents on attractive features found in their school district or charter school, financial incentives used for recruitment and retention, and retirement projections.
|
#!/usr/bin/env python2
# -*- coding: UTF-8 -*-
import re
import common
class Types(object):
def __iter__(self):
return (x for x in ['feat', 'fix', 'perf', 'revert', 'docs', 'chore', 'style', 'refactor', 'test', 'merge'])
TYPES = Types()
TITLE_PATTERN_REGEX = r'(?P<type>' + '|'.join(TYPES) + r')\((?P<scope>\S+)\): (?P<subject>[a-z].*)'
# return all unpushed commit message
def unpushed_commit_message():
command_result = common.execute_command('git log --branches --not --remotes --pretty=format:%h:%aE:%s')
if command_result.status != 0:
return []
else:
return command_result.out.decode().split('\n')
def commit_in_path(old_path=None, new_path=None):
git_command = 'git log --first-parent --pretty=format:%h:%aE:%s'
if old_path is not None and len(old_path) > 0:
git_command += ' ' + old_path
if new_path is not None and len(new_path) > 0:
git_command += '..' + new_path
command_result = common.execute_command(git_command)
if command_result.status != 0:
return []
else:
return command_result.out.decode().split('\n')
# check the title conformance against commitizen/angularjs/... rules
def __check_commit_title(commit_hash, commit_title):
# Test the title against regex
title_pattern = re.compile(TITLE_PATTERN_REGEX)
title_match = title_pattern.match(commit_title)
# Convert into a boolean
title_have_not_matched = title_match is None
if title_have_not_matched is True:
common.error(
"Commit '"
+ commit_hash
+ "' with title '"
+ commit_title
+ "' does not follow Sheldon rules: '<" + "|".join(TYPES) + ">(<scope>): <subject>'.")
return title_have_not_matched
# check that the author is not anonymous
def __check_commit_author(commit_hash, commit_author):
# Test the title against regex
author_pattern = re.compile(r'\.*anonymous\.*')
author_match = author_pattern.match(commit_author.lower())
# Convert into a boolean
author_have_matched = author_match is not None
if author_have_matched is True:
common.error(
"Commit '"
+ commit_hash
+ "' has anonymous author.")
return author_have_matched
def check_commit_messages(commit_messages):
results = [False]
for commit_message in commit_messages:
# Split commit message according to "--pretty=format:%h:%aE:%s"
split_message = commit_message.split(':', 2)
if len(split_message) == 3:
# Extract the type
commit_hash = split_message[0]
commit_author = split_message[1]
commit_title = split_message[2]
results.append(__check_commit_title(commit_hash, commit_title))
results.append(__check_commit_author(commit_hash, commit_author))
common.note('%d commit(s) checked, %d error(s) found.' % (len(commit_messages), results.count(True)))
return results
|
Post Equipment designed and fabricated backstop assembly. Give us a call today.
Heavy Duty,Quality,Long Lasting replacement motors and many other parts IN STOCK ready for you.
***NEW*** Cutting auger, 540 CV PTO, right hand side discharge, 36″ folding conveyor with magnets, light kit, 10″ rubber extension, EZ400 scale.
|
#!/usr/bin/env python
import os
import yaml
NO_INDEX = set(['PL', 'Others', 'Topics', 'Roadmap'])
def load_mkdocs():
filename = 'mkdocs.yml'
with open(filename) as f:
        return yaml.safe_load(f)
def make_index(docs):
groups = docs['pages']
for group in groups:
topic = group.keys()[0]
_pages = group[topic]
if topic not in NO_INDEX and isinstance(_pages, list):
pages = []
for _page in _pages:
page = _page.items()[0]
if 'index.md' not in page[1]:
path = page[1]
new_page = (page[0], path.split('/')[1])
pages.append(new_page)
write_index(topic, pages)
def write_index(topic, pages):
index = os.path.join('docs', topic.lower(), 'index.md')
title = '### **%s**' % topic
contents = '\n'.join(map(map_page, pages))
document = '\n\n'.join([title, contents])
with open(index, 'w') as f:
f.write(document)
def map_page(page):
"""
('Chapter 1. Title', 'foo/ch1.md') => '[Chapter 1. Title](foo/ch1.md)'
"""
return '* [%s](%s)' % (page[0], page[1])
docs = load_mkdocs()
make_index(docs)
|
A Borderline Mom: I Love My Mom!!!
My mom is like the greatest person. Her mom was like the greatest person who ever lived. She was the only person to date that has made me feel like no matter what I did she loved me to pieces. I can tell that my mom shares a piece of my grandma's spirit. I love that in her.
No matter what our differences in opinions are, I feel sometimes that my mom and I are more alike than she would care to admit and I think my mom is an excellent role model. I would love to be revered the same way that she is professionally. So many people tell me that they love my mom and they miss her or enjoy working with her. It makes me feel good to know that she has worked so hard to get to that position.
I know that sometimes she wishes she had been a little more involved with us as we were growing up, but hey, I know plenty of moms that were stay at home whose kids won't even speak to them now! So, I'd say she did a pretty good job because I think she's a wonderful person, has a beautiful soul and is someone who I aspire to be like professionally some day soon!
|
# This is a test program for the DirectLoadFlowCalculator present in the
# gridsim.electricalnetwork module. It computes the example given in
# http://home.eng.iastate.edu/~jdm/ee553/DCPowerFlowEquations.pdf pp. 10-15 and
# compare results with those of the reference.
import unittest
import numpy as np
from gridsim.electrical.loadflow import DirectLoadFlowCalculator
class TestDLF2(unittest.TestCase):
def test_reference(self):
# network set-up
# ----------------
# boolean array specifying which bus is a PV bus
# has to be a one-dimensional boolean numpy array
is_PV = np.array([False, True, False, True])
# array giving from-bus and to-bus ids for each branch
# b12, b13, b14, b23, b34
b = np.array([[0, 1], [0, 2], [0, 3], [1, 2], [2, 3]])
# array containing branch admittances
Yb = np.zeros((5, 4), dtype=complex)
yT = [1j * (-10.), 1j * (-10.), 1j * (-10.), 1j * (-10.), 1j * (-10.)]
for i_branch in range(0, 5):
Yb[i_branch, 0] = yT[i_branch]
Yb[i_branch, 1] = yT[i_branch]
Yb[i_branch, 2] = yT[i_branch]
Yb[i_branch, 3] = yT[i_branch]
# calculator initialization
# --------------------------
s_base = 1.0
v_base = 1.0
dlf = DirectLoadFlowCalculator()
# dlf = NewtonRaphsonLoadFlowCalculator()
dlf.update(s_base, v_base, is_PV, b, Yb)
# input buses electrical values
# ------------------------------
# P, Q, V, Th can be either numpy 1-D arrays or 2-D arrays with 1 row,
# respectively 1 column
# P1,P2,P3,P4, slack power can be set to any value, e.g. float('NaN')
P = np.array([float('NaN'), 2. - 1., -4., 1.])
Q = np.array([float('NaN'), 0., 0., 0.])
# mutable variable is needed
V = np.ones([4])
# mutable variable is needed
Th = np.zeros([4])
# compute buses other electrical values
# --------------------------------------
[P, Q, V, Th] = dlf.calculate(P, Q, V, Th, True)
# check results against reference values
p_slack = P[0]
# print "P_slack ="
# print p_slack
refslack = 2.
self.assertEqual(p_slack, refslack, "The power of the slack bus is "
+ str(p_slack) + " instead of " + str(refslack))
# print "Th = "
# print Th
ref_Th = np.array([0., -0.025, -0.15, -0.025]) # Th1,Th2,Th3,Th4
self.assertTrue(np.allclose(Th, ref_Th))
# get branch currents
# ---------------------------------
[Pij, Qij, Pji, Qji] = dlf.get_branch_power_flows(True)
# check results against reference values
# print "Pij = "
# print Pij
# P12,P13,P14, P23,P34
ref_Pij = np.array([0.25, 1.5, 0.25, 1.25, -1.25])
self.assertTrue(np.allclose(Pij, ref_Pij))
if __name__ == '__main__':
unittest.main()
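With the slack angle fixed at zero, the DC load flow in this reference case reduces to one linear solve, B'·theta = P, over the non-slack buses. A standalone sketch with the same numbers (every line susceptance is 10, matching yT above):

```python
import numpy as np

# Reduced susceptance matrix B' with slack bus 0 removed; bus 0 connects
# to every other bus here, so its removal leaves this 3x3 system.
B_red = np.array([[ 20., -10.,   0.],
                  [-10.,  30., -10.],
                  [  0., -10.,  20.]])
P_inj = np.array([1., -4., 1.])          # net injections at buses 1..3 (p.u.)

theta = np.linalg.solve(B_red, P_inj)    # bus angles; slack angle is 0
p_slack = -10.0 * theta.sum()            # power leaving slack bus 0
```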
|
This final chapter looks back at the arguments presented in this book and envisages possible directions of work to come. Will the initial effort to use skeptical, critical, or alternative approaches to the field of private international law have a future? Where does this future lie? Clearly, this heterodox project calls for some consolidation. An alternative would be to maintain a largely theoretical stance, if only through the importation, within the largely conservative private international legal academy, of viewpoints developed outside the mainstream in other fields. There remain two functions for the conflict of laws to fulfill. The first is to spell out its legal regime. The second involves the tool-kit of private international law.
|
from pi3bar.plugins.base import Plugin
import datetime
class Clock(Plugin):
"""
:class:`pi3bar.app.Pi3Bar` plugin to show the current date and time.
:param full_format: :class:`str` - :meth:`datetime.datetime.strftime` argument
:param short_format: :class:`str` - :meth:`datetime.datetime.strftime` argument, short alternative
:param timezone: :class:`str` - timezone to use. E.g. 'US/Pacific' or 'Europe/Berlin'
Examples:
.. code-block:: python
# default formats:
Clock(full_format='%Y-%m-%d %H:%M:%S', short_format='%d. %H:%M:%S')
        # other format example:
        Clock(full_format='%d.%m.%Y %H:%M:%S')
# pass a timezone
Clock(timezone='US/Eastern')
"""
#: Refresh every second
ticks = 1
def __init__(self, full_format='%Y-%m-%d %H:%M:%S', short_format='%d. %H:%M', timezone=None, **kwargs):
self.full_format = full_format
self.short_format = short_format
self.timezone = timezone
self.instance = timezone
super(Clock, self).__init__(**kwargs)
def cycle(self):
now = datetime.datetime.now(self.timezone)
self.full_text = now.strftime(self.full_format)
self.short_text = now.strftime(self.short_format)
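Note that `datetime.datetime.now()` expects a `tzinfo` object rather than a zone name string, so a string `timezone` argument has to be converted before use. A minimal conversion sketch using the stdlib `zoneinfo` module (Python 3.9+; this is an assumption — the original plugin may rely on a different mechanism such as `pytz`):

```python
import datetime
from zoneinfo import ZoneInfo

def resolve_timezone(tz):
    # Accept None (naive local time), a tzinfo object, or a zone name string.
    if isinstance(tz, str):
        return ZoneInfo(tz)
    return tz

now = datetime.datetime.now(resolve_timezone('Europe/Berlin'))
print(now.strftime('%Y-%m-%d %H:%M:%S'))
```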
|
Dassault Systèmes announced the launch of “Quarry Aught Weakness,” a solution providing an integrated and open collaborative environment for correcting defects across the entire product development process, the company says.
The new offering helps companies capture and leverage existing corporate knowledge, connecting design, engineering and manufacturing disciplines across a company’s whole ecosystem of partners and suppliers. It covers all stages of product development, from conceptual design through to manufacturing, including domains such as design, powertrain, and body and interior.
For more information, visit Dassault Systèmes.
Sources: Press materials received from the company and additional information gleaned from the company’s website.
|
from django.test import TestCase
from ..models import *
from ..tests.factory import *
from django.test.client import Client
from ..views.api import *
# everything works so far
class GroupTest(TestCase):
def test_serializeAll(self):
# =================================================================
# tests the api showAllGroups
# =================================================================
self.maxDiff = None
group1 = UserFactory.createGroup(10)
group2 = UserFactory.createGroup(10)
result = GroupSerializer.serializeAll()
groupMembers1 = list()
for m1 in Membership.objects.filter(group=group1):
groupMembers1.append(m1.user.username)
groupMembers2 = list()
for m2 in Membership.objects.filter(group=group2):
groupMembers2.append(m2.user.username)
# =================================================================
# test the name of the group is the same groupname in the result
# =================================================================
self.assertTrue(group1.name in [group["name"] for group in result["groups"]])
self.assertTrue(group2.name in [group["name"] for group in result["groups"]])
# =================================================================
# test the users in the group are the same users in the result
# =================================================================
for group in result["groups"]:
if group["name"] == group1.name:
self.assertEquals(groupMembers1, group["users"])
break
elif group["name"] == group2.name:
self.assertEquals(groupMembers2, group["users"])
# =================================================================
# test the quantity of the result is correct
# =================================================================
length = 0
for array in [group["users"] for group in result["groups"]]:
length += len(array)
self.assertEquals(length, 20)
# =================================================================
# test the tableCreator and groupCreator are False
# =================================================================
for group in result["groups"]:
if group["name"] == group1.name:
self.assertFalse(group["tableCreator"])
elif group["name"] == group2.name:
self.assertFalse(group["tableCreator"])
def test_serializeOne(self):
group = UserFactory.createGroup(10)
result = GroupSerializer.serializeOne(group)
# =================================================================
# tests the users in the result are the same users in the group
# =================================================================
groupMember = list()
for m in Membership.objects.filter(group=group):
groupMember.append(m.user.username)
#for user in groupMember:
# self.assertTrue(user in result["users"])
# =================================================================
# test the quantity of the result is correct
# =================================================================
self.assertEquals(len(result["users"]), 10)
def test_groups(self):
group = UserFactory.createGroup(10)
c = Client()
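The grouping logic these tests verify — collecting each group's member usernames — can be sketched without the Django ORM (all names here are illustrative stand-ins, not part of the application):

```python
# Hypothetical stand-ins for Membership.objects.filter(group=...):
# (group_name, username) pairs as they might come back from the database.
memberships = [
    ('group1', 'alice'),
    ('group1', 'bob'),
    ('group2', 'carol'),
]

# Bucket usernames by group, preserving insertion order.
groups = {}
for group_name, username in memberships:
    groups.setdefault(group_name, []).append(username)

# Shape the result like the serializer output checked in the tests.
result = {'groups': [{'name': name, 'users': users}
                     for name, users in groups.items()]}
print(result['groups'])
```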
|
Comments: Left: Black khatyrkite with spinel, olivine and augite (whitish material). Right: High-resolution image (collected through a transmission electron microscope) of a region of about 15 nanometers showing 5-fold symmetry.
Location: Khatirskii ultramafic zone of the Koryak-Kamchata fold area, Koryak Mountains, Russia.
|
# Copyright (c) 2017 The WebRTC project authors. All Rights Reserved.
#
# Use of this source code is governed by a BSD-style license
# that can be found in the LICENSE file in the root of the source
# tree. An additional intellectual property rights grant can be found
# in the file PATENTS. All contributing project authors may
# be found in the AUTHORS file in the root of the source tree.
"""Unit tests for the echo path simulation module.
"""
import shutil
import os
import tempfile
import unittest
import pydub
from . import echo_path_simulation
from . import echo_path_simulation_factory
from . import signal_processing
class TestEchoPathSimulators(unittest.TestCase):
  """Unit tests for the echo path simulation module.
"""
def setUp(self):
"""Creates temporary data."""
self._tmp_path = tempfile.mkdtemp()
# Create and save white noise.
silence = pydub.AudioSegment.silent(duration=1000, frame_rate=48000)
white_noise = signal_processing.SignalProcessingUtils.GenerateWhiteNoise(
silence)
self._audio_track_num_samples = (
signal_processing.SignalProcessingUtils.CountSamples(white_noise))
self._audio_track_filepath = os.path.join(self._tmp_path, 'white_noise.wav')
signal_processing.SignalProcessingUtils.SaveWav(
self._audio_track_filepath, white_noise)
    # Make a copy of the white noise audio track file; it will be used by
# echo_path_simulation.RecordedEchoPathSimulator.
shutil.copy(self._audio_track_filepath, os.path.join(
self._tmp_path, 'white_noise_echo.wav'))
def tearDown(self):
"""Recursively deletes temporary folders."""
shutil.rmtree(self._tmp_path)
def testRegisteredClasses(self):
# Check that there is at least one registered echo path simulator.
registered_classes = (
echo_path_simulation.EchoPathSimulator.REGISTERED_CLASSES)
self.assertIsInstance(registered_classes, dict)
self.assertGreater(len(registered_classes), 0)
# Instance factory.
factory = echo_path_simulation_factory.EchoPathSimulatorFactory()
# Try each registered echo path simulator.
for echo_path_simulator_name in registered_classes:
simulator = factory.GetInstance(
echo_path_simulator_class=registered_classes[
echo_path_simulator_name],
render_input_filepath=self._audio_track_filepath)
echo_filepath = simulator.Simulate(self._tmp_path)
if echo_filepath is None:
self.assertEqual(echo_path_simulation.NoEchoPathSimulator.NAME,
echo_path_simulator_name)
# No other tests in this case.
continue
      # Check that the echo audio track file exists and its length is greater
      # than or equal to that of the render audio track.
self.assertTrue(os.path.exists(echo_filepath))
echo = signal_processing.SignalProcessingUtils.LoadWav(echo_filepath)
self.assertGreaterEqual(
signal_processing.SignalProcessingUtils.CountSamples(echo),
self._audio_track_num_samples)
|
Get in touch with our Sidney Wedding Car Rentals team.
Getting married in Sidney and searching for the best Wedding Car Rentals in town? Look no further - we’ve got everything you need to feel like royalty on your special day!
At Wedding Car Rental we offer a choice of Wedding Car Service across a range of packages, so it’s easy to tailor our service to suit you. Keen to find out more about our deals on Wedding Car Rentals in Sidney? Get in touch today.
Wedding Car Rental in Sidney and New York. Get a great deal on Wedding Car Service in Sidney. Wedding Car Rental is one of the leading suppliers of Wedding Car Rentals in the Sidney area. A huge range of Wedding Car Rental services including Classic Wedding Cars, Modern Wedding Cars, Sports Cars, Supercars, Limousines and many more.
|
#!/usr/bin/python
# -*- coding: utf-8 -*-
from __future__ import with_statement
import sys
import traceback
import StringIO
import logging
logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("bmgraph.file")
class NotImplementedError(Exception):
pass
class mdict(dict):
"""This class implements a multi-value dictionary."""
def __setitem__(self, key, value):
self.setdefault(key, []).append(value)
def __delitem__(self, key):
raise NotImplementedError("del not supported for mdict, use .delete(k, v) instead")
def delete(self, key, value):
self[key].remove(value)
def __str__(self):
return unicode(self).encode('ASCII', 'backslashreplace')
def __unicode__(self):
pairs = []
for key, values in self.items():
for value in values:
pairs.append("%s=%s" % (key, value.replace(" ", "+")))
return u" ".join(pairs)
class GraphSink(object):
def special_node_read(self, node_name, node_type):
pass
def edge_read(self, node1_name, node1_type,
node2_name, node2_type, type, attribute_dict):
pass
def node_attributes_read(self, node_name, node_type, attribute_dict):
pass
def comment_read(self, type, value):
pass
class GraphObjectSink(GraphSink):
'''Sink for an in-memory Graph object. If passed a graph object as the kw
param graph, append to that Graph.'''
def __init__(self, graph=None):
if graph != None:
self.graph = graph
else:
self.graph = Graph()
def special_node_read(self, node_name, node_type):
self.graph.get_node(node_name, node_type).special_node = True
def edge_read(self, node1_name, node1_type, node2_name, node2_type,
type, attribute_dict):
n1 = self.graph.get_node(node1_name, node1_type)
n2 = self.graph.get_node(node2_name, node2_type)
e = n1.add_edge(n2)
e.type = type
for k, v in attribute_dict.iteritems():
e.attributes[k] = v
def node_attributes_read(self, node_name, node_type, attribute_dict):
n = self.graph.get_node(node_name, node_type)
for k, v in attribute_dict.iteritems():
n.attributes[k] = v
def get_object(self):
return self.graph
class Graph(object):
def __init__(self):
self.attributes = mdict()
self.nodes = {}
self.comments = []
def add_node(self, node):
self.nodes[node.name] = node
def del_node(self, node):
del self.nodes[node.name]
def get_node(self, name, type):
if self.nodes.has_key(name):
return self.nodes[name]
else:
n = Node(self, name, type)
return n
    def __str__(self):
        return unicode(self).encode('ASCII', 'backslashreplace')
def __unicode__(self):
ret = []
for node in self.nodes.values():
if node.special_node:
ret.append(unicode(node))
specials_written = True
for comment in self.comments:
ret.append(u"# %s" % unicode(comment))
comments_written = True
written_edges = set([])
for node in self.nodes.values():
for edge in node.edges:
if unicode(edge) in written_edges:
continue
ret.append(unicode(edge))
written_edges.add(unicode(edge))
for node in self.nodes.values():
if len(node.attributes.keys()) == 0:
continue
ret.append(u"# _attributes %s %s" % (unicode(node), unicode(node.attributes)))
ret.append(u'')
return u'\n'.join(ret)
class Edge(object):
def __init__(self, n1, n2):
self.attributes = mdict()
self.n1 = n1
self.n2 = n2
def other(self, node):
if node == self.n1:
return self.n2
return self.n1
def __cmp__(self, other):
return (str(self) == str(other))
def __str__(self):
return unicode(self).encode('ASCII', 'backslashreplace')
def __unicode__(self):
return u"%s %s %s %s" % (self.n1, self.n2, self.type, self.attributes)
def __repr__(self):
return "<Edge %s>" % str(self)
class Node(object):
def __init__(self, graph, name, type):
self.graph = graph
self.attributes = mdict()
self.name = name
self.type = type
self.special_node = False
self.edges = []
self.graph.add_node(self)
def add_edge(self, other):
e = Edge(self, other)
self.edges.append(e)
other.edges.append(e)
return e
def remove_edge(self, edge):
self.edges.remove(edge)
def delete(self):
self.graph.del_node(self)
for edge in self.edges:
other = edge.other(self)
other.remove_edge(edge)
def __cmp__(self, other):
return (str(self) == str(other))
def __str__(self):
return unicode(self).encode('ASCII', 'backslashreplace')
def __unicode__(self):
if self.type:
return u"%s_%s" % (self.type, self.name)
return self.name
def __repr__(self):
return "<Node %s>" % str(self)
def read_file(stream, sink):
lines_read = 0
for line in stream:
lines_read += 1
if logger.isEnabledFor(logging.INFO):
if lines_read % 10000 == 0:
logger.info("Read %i lines..." % lines_read)
else:
logger.debug("Read %i lines..." % lines_read)
if len(line) < 1:
continue
# Decode early
try:
pass
# line = line.decode('utf-8', 'replace')
except Exception, e:
print lines_read, line.replace("\n", "")
traceback.print_exc()
raise e
if line[0] == '#':
comment_type, value = line[2:].split(" ", 1)
# only handles node attributes atm...
if comment_type == "_attributes":
node, attributes = value.split(" ", 1)
parts = node.split('_', 1)
if len(parts) == 1:
node_name = parts[0]
node_type = None
else:
node_name = parts[1]
node_type = parts[0]
attributes = attributes.split(" ")
attr_dict = {}
for attribute in attributes:
try:
key, value = attribute.split("=", 1)
attr_dict[key] = value.replace("\n", "").replace("+", " ")
except ValueError, ve:
logger.warning("Line %i: error parsing attribute %s" % (lines_read, attribute))
logger.warning(traceback.format_exc())
sink.node_attributes_read(node_name, node_type, attr_dict)
else:
sink.comment_read(comment_type, value.replace("\n", "").replace("+", " "))
else:
parts = line.split(" ", 2)
if len(parts) == 1:
if parts[0].strip() == "":
continue
parts = parts[0].replace("\n", "").split("_", 1)
if len(parts) == 1:
sink.special_node_read(parts[0], None)
else:
sink.special_node_read(parts[1], parts[0])
if len(parts) == 3:
attr_dict = {}
edge_attributes = parts[2].replace("\n", "").split(" ")
type = edge_attributes[0]
if len(edge_attributes) > 0:
for attr in edge_attributes[1:]:
try:
key, value = attr.split("=", 1)
attr_dict[key] = value.replace("+", " ")
except ValueError, ve:
logger.warning("Line %i: error parsing attribute %s" % (lines_read, attr))
logger.warning(traceback.format_exc())
n1_parts = parts[0].split('_', 1)
if len(n1_parts) == 1:
n1_name = n1_parts[0]
n1_type = None
else:
n1_name = n1_parts[1]
n1_type = n1_parts[0]
n2_parts = parts[1].split('_', 1)
if len(n2_parts) == 1:
n2_name = n2_parts[0]
n2_type = None
else:
n2_name = n2_parts[1]
n2_type = n2_parts[0]
sink.edge_read(n1_name, n1_type, n2_name, n2_type,
type, attr_dict)
def read_string(string, sink):
return read_file(StringIO.StringIO(string), sink)
def main(args):
    if len(args) > 0:
        sink = GraphObjectSink()
        for arg in args:
            try:
                # read_file() expects a stream, so open the path first.
                with open(arg) as stream:
                    read_file(stream, sink)
            except:
                traceback.print_exc()
        print sink.get_object()
    else:
        print "Please run test.py to run tests."
if __name__ == '__main__':
main(sys.argv[1:])
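The `mdict` class above keeps every value ever assigned to a key, which is what lets node and edge attributes repeat. A Python 3 sketch of the same idea (the module itself is Python 2):

```python
class MultiDict(dict):
    """Dictionary in which assignment appends rather than overwrites."""
    def __setitem__(self, key, value):
        self.setdefault(key, []).append(value)

d = MultiDict()
d['goodness'] = '0.5'
d['goodness'] = '0.7'
print(d['goodness'])  # → ['0.5', '0.7']
```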
|
Paprika in name and in essence! We're talking about a medium-toned, orangey earth colour which is sweet but has a spicy aftertaste. It's a transition shade with an adherent texture that enfolds the look in timeless, sumptuous warmth. If you love sensual, warm shades, you need to get your hands on Paprika! Try it on the entire eyelid for a velvety, super-warm smokey eye!
INGREDIENTS: talc, zea mays (corn) starch, zinc stearate, mica, silica, dimethicone, isononyl isononanoate, caprylic/capric triglyceride, chlorphenesin, dimethiconol, potassium sorbate, synthetic beeswax. +/−: ci 77491 - ci 77492 - ci 77499 (iron oxides).
|
# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
import grpc
import encoder_pb2 as encoder__pb2
class EncoderStub(object):
# missing associated documentation comment in .proto file
pass
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.encode = channel.unary_unary(
'/Encoder/encode',
request_serializer=encoder__pb2.EncodeRequest.SerializeToString,
response_deserializer=encoder__pb2.EncodeResponse.FromString,
)
self.decode = channel.unary_unary(
'/Encoder/decode',
request_serializer=encoder__pb2.DecodeRequest.SerializeToString,
response_deserializer=encoder__pb2.DecodeResponse.FromString,
)
class EncoderServicer(object):
# missing associated documentation comment in .proto file
pass
def encode(self, request, context):
# missing associated documentation comment in .proto file
pass
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def decode(self, request, context):
# missing associated documentation comment in .proto file
pass
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_EncoderServicer_to_server(servicer, server):
rpc_method_handlers = {
'encode': grpc.unary_unary_rpc_method_handler(
servicer.encode,
request_deserializer=encoder__pb2.EncodeRequest.FromString,
response_serializer=encoder__pb2.EncodeResponse.SerializeToString,
),
'decode': grpc.unary_unary_rpc_method_handler(
servicer.decode,
request_deserializer=encoder__pb2.DecodeRequest.FromString,
response_serializer=encoder__pb2.DecodeResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'Encoder', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
|
Government still learning lessons from ransomware that hit 300,000 PCs worldwide and took down dozens of NHS trusts.
The WannaCry attack warranted a meeting of the government's Cobra crisis committee.
The WannaCry ransomware attack was the biggest test of the year for the UK's new cybersecurity body.
The National Cyber Security Centre's (NCSC) annual review marks a year since it started work, although it was officially opened in February. In those 12 months, the NCSC says 1,131 cyber incidents have been reported to it.
Of those, 590 were classed as significant cyber attacks, ranging from attacks on National Health Service hospitals and the Houses of Parliament, through to attacks on businesses.
Thirty of these incidents were deemed sufficiently serious to require the NCSC, which is part of GCHQ, to coordinate a cross-government response. Of these, the WannaCry attack was considered so significant that it warranted a meeting of the government's Cobra crisis committee.
Other cyberattacks which required a cross-government response included the Tesco Bank hack -- which saw a total of £2.5 million stolen from 9,000 customers -- and June's Petya ransomware outbreak.
The WannaCry epidemic saw ransomware spread with the help of a leaked NSA exploit and infected over 300,000 PCs at major organisations around the globe. The NHS was one of the highest profile victims of the attack, with 47 trusts and foundation trusts affected. The ransomware forced a number of hospitals offline and some took weeks to recover.
The NCSC's response to WannaCry involved a record number of cybersecurity professionals sharing information and NCSC experts deployed to sites which had fallen victim.
The NCSC continues to work with government departments to identify vulnerabilities and which data should be backed up so as to not become irrecoverable should a similar attack occur in future.
The organisation also led a government review of lessons learned from the incident, including the need for increased collaboration with law enforcement and improving the resilience of NHS networks.
The NCSC continues to investigate who carried out the attack. While the culprit hasn't officially been identified, security services on both sides of the Atlantic suspect the attack was launched by hackers in North Korea.
The NCSC report notes that despite the body's best efforts, it can't prevent every attack. At the same time, it aims to deliver a "world-class incident management service" and ensure all the necessary provisions are in place to react to incidents.
"The threat remains very real and growing -- further attacks will happen and there is much more for us to do," said Ciaran Martin, CEO of the NCSC.
While the report cites WannaCry as the biggest test for the NCSC so far, the body's technical director Ian Levy recently warned that it's likely only a matter of time before an attack which makes WannaCry looks like small fry hits the UK.
|
"""
Git Hook. A simple Python way to write hooks.
Hook is the base class for every Hook.
AUTHOR:
Gael Magnan de bornier
"""
import sys
import os
import re
from tempfile import NamedTemporaryFile
from contextlib import contextmanager
from src.Utils import Bash
from src.Tasks import HookTask
class Hook(object):
def __init__(self, tasks=None, conf_location="", exclude=None):
if tasks is None:
tasks = []
if exclude is None:
exclude = []
self.exclude = exclude
self.tasks = tasks
self.conf_location = conf_location
def main_process(self):
"""Main function"""
kwargs = self.get_exec_params()
result = self.process(**kwargs)
if result:
sys.exit(0)
sys.exit(1)
def get_script_params(self, **kwargs):
return {'${0}'.format(i): arg for i, arg in enumerate(sys.argv)}
def get_line_params(self, **kwargs):
return {}
def get_files_params(self, **kwargs):
return self.get_files_grouped_by_change(**kwargs)
def get_exec_params(self):
"""Reads the inputs to get execution parameters"""
params = {'conf_location': self.conf_location}
params.update(self.get_script_params(**params))
params.update(self.get_line_params(**params))
params.update(self.get_files_params(**params))
return params
def process(self, **kwargs):
        """Main treatment: execute the tasks; return False if any task fails,
        True otherwise"""
try:
tasks = self.get_tasks_group_by_type()
return self.execute_tasks_group_by_type(*tasks, **kwargs)
except Exception as e:
            print("An error occurred while running the script; "
                  "please report the following message to your administrator.")
print(e)
return False
def get_tasks_group_by_type(self):
        """This method returns the tasks grouped by execution context;
        the groups should be the ones used by execute_tasks_group_by_type"""
general_tasks = []
new_file_task = []
modified_file_task = []
deleted_file_task = []
for task in self.tasks:
if issubclass(task, HookTask.HookNewOrModifiedFileTask):
new_file_task.append(task)
modified_file_task.append(task)
elif issubclass(task, HookTask.HookNewFileTask):
new_file_task.append(task)
elif issubclass(task, HookTask.HookModifiedFileTask):
modified_file_task.append(task)
elif issubclass(task, HookTask.HookDeletedFileTask):
deleted_file_task.append(task)
elif issubclass(task, HookTask.HookFileTask):
new_file_task.append(task)
modified_file_task.append(task)
deleted_file_task.append(task)
else:
general_tasks.append(task)
return (general_tasks, new_file_task, modified_file_task,
deleted_file_task)
def execute_tasks_group_by_type(self, general_tasks, new_file_task,
modified_file_task, deleted_file_task,
**kwargs):
"""The tasks are executed with different context depending on their type
The HookFileTasks are executed on specific files depending on the
changes the file encountered
        Other tasks are executed as general statements"""
for task in general_tasks:
if not task().execute(**kwargs):
return False
if new_file_task or modified_file_task or deleted_file_task:
for file_type in ['new_files', 'modified_files', 'deleted_files']:
files_to_check = kwargs[file_type]
for filename in files_to_check:
                    exclusion_matches = [x for x in self.exclude if re.match(x, filename)]
                    if exclusion_matches:
                        print("{0} ignored because it matches: {1}".format(filename, exclusion_matches))
continue
if(file_type != "deleted_files" and
len(new_file_task) + len(modified_file_task) > 0):
try:
file_val = self.get_file(filename, **kwargs)
except:
print("Could not read %s" % filename)
return False
with self.get_temp_file() as tmp:
try:
self.write_file_value_in_file(file_val, tmp)
except:
print("Could not write %s " % filename)
return False
if file_type == "new_files":
for task in new_file_task:
if not task().execute(file_desc=tmp,
filename=filename,
file_value=file_val,
**kwargs):
return False
elif file_type == "modified_files":
for task in modified_file_task:
if not task().execute(file_desc=tmp,
filename=filename,
file_value=file_val,
**kwargs):
return False
else:
for task in deleted_file_task:
if not task().execute(filename=filename, **kwargs):
return False
return True
def get_file(self, filename, **kwargs):
pass
def get_file_diffs(self, **kwargs):
pass
@contextmanager
def get_temp_file(self, mode="r+"):
f = NamedTemporaryFile(mode=mode, delete=False)
try:
yield f
finally:
try:
os.unlink(f.name)
except OSError:
pass
def write_file_value_in_file(self, file_value, file_desc):
if file_value:
file_desc.write("\n".join(file_value))
file_desc.flush()
else:
raise Exception()
def get_files_grouped_by_change(self, **kwargs):
added = []
modified = []
deleted = []
file_diffs = self.get_file_diffs(**kwargs)
for line in file_diffs:
if len(line) < 3:
continue
mode, filename = self.get_mode_and_filname(line)
if mode == "A":
added.append(filename)
elif mode == "M":
modified.append(filename)
elif mode == "D":
deleted.append(filename)
return {'new_files': added,
'modified_files': modified,
'deleted_files': deleted}
def get_mode_and_filname(self, line):
try:
mode, filename = line.split()
return mode, filename
except:
line_splited = line.split()
if len(line_splited) > 2:
mode = line_splited[0]
filename = line.replace(mode, "", 1)
return mode, filename
else:
            print("An error occurred while trying to split: {0}."
                  " Please warn an administrator.".format(line))
def main(_klass, tasks=None, conf_location="", exclude=None):
if issubclass(_klass, Hook):
hook = _klass(tasks, conf_location, exclude)
hook.main_process()
else:
print("Not a valid class, should inherit from Hook")
sys.exit(1)
sys.exit(0)
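The change-grouping performed by `get_files_grouped_by_change` can be exercised standalone. A sketch assuming `git diff --name-status`-style input lines (a mode letter, then the filename):

```python
def group_by_change(diff_lines):
    """Split 'A/M/D filename' lines into added/modified/deleted buckets."""
    groups = {'new_files': [], 'modified_files': [], 'deleted_files': []}
    for line in diff_lines:
        parts = line.split(None, 1)
        if len(parts) != 2:
            continue  # skip blank or malformed lines
        mode, filename = parts
        if mode == 'A':
            groups['new_files'].append(filename)
        elif mode == 'M':
            groups['modified_files'].append(filename)
        elif mode == 'D':
            groups['deleted_files'].append(filename)
    return groups

print(group_by_change(['A new.py', 'M src/old.py', 'D gone.py']))
```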
|
• Be used as the basis of any complaint or disciplinary hearing and action following our bodies’ respective complaints procedures.
It is hoped that this Code will not only impact the members of the signatory bodies but will also have a wider influence by inspiring the work of people who are not currently members of any of the signatories but carry out coaching and mentoring related activities.
The five current signatories welcome other coaching, mentoring and supervision professional bodies and associations to become signatories to this Global Code of Ethics.
|
# Copyright 2014 ETH Zurich
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
:mod:`base` --- Base path server
================================
"""
# Stdlib
import logging
import random
import threading
from collections import defaultdict, deque
from abc import ABCMeta, abstractmethod
from threading import Lock
# External packages
from external.expiring_dict import ExpiringDict
# SCION
from lib.crypto.hash_tree import ConnectedHashTree
from lib.crypto.symcrypto import crypto_hash
from lib.defines import (
HASHTREE_EPOCH_TIME,
HASHTREE_TTL,
PATH_SERVICE,
)
from lib.log import add_formatter, Rfc3339Formatter
from lib.path_seg_meta import PathSegMeta
from lib.packet.path_mgmt.rev_info import RevocationInfo
from lib.packet.path_mgmt.seg_recs import PathRecordsReply, PathSegmentRecords
from lib.packet.scmp.types import SCMPClass, SCMPPathClass
from lib.packet.svc import SVCType
from lib.path_db import DBResult, PathSegmentDB
from lib.rev_cache import RevCache
from lib.thread import thread_safety_net
from lib.types import (
CertMgmtType,
PathMgmtType as PMT,
PathSegmentType as PST,
PayloadClass,
)
from lib.util import SCIONTime, sleep_interval
from lib.zk.cache import ZkSharedCache
from lib.zk.errors import ZkNoConnection
from lib.zk.id import ZkID
from lib.zk.zk import ZK_LOCK_SUCCESS, Zookeeper
from scion_elem.scion_elem import SCIONElement
class PathServer(SCIONElement, metaclass=ABCMeta):
"""
The SCION Path Server.
"""
SERVICE_TYPE = PATH_SERVICE
MAX_SEG_NO = 5 # TODO: replace by config variable.
# ZK path for incoming PATHs
ZK_PATH_CACHE_PATH = "path_cache"
# ZK path for incoming REVs
ZK_REV_CACHE_PATH = "rev_cache"
# Max number of segments per propagation packet
PROP_LIMIT = 5
# Max number of segments per ZK cache entry
ZK_SHARE_LIMIT = 10
# Time to store revocations in zookeeper
ZK_REV_OBJ_MAX_AGE = HASHTREE_EPOCH_TIME
def __init__(self, server_id, conf_dir):
"""
:param str server_id: server identifier.
:param str conf_dir: configuration directory.
"""
super().__init__(server_id, conf_dir)
self.down_segments = PathSegmentDB(max_res_no=self.MAX_SEG_NO)
self.core_segments = PathSegmentDB(max_res_no=self.MAX_SEG_NO)
self.pending_req = defaultdict(list) # Dict of pending requests.
self.pen_req_lock = threading.Lock()
self._request_logger = None
# Used when l/cPS doesn't have up/dw-path.
self.waiting_targets = defaultdict(list)
self.revocations = RevCache()
# A mapping from (hash tree root of AS, IFID) to segments
self.htroot_if2seg = ExpiringDict(1000, HASHTREE_TTL)
self.htroot_if2seglock = Lock()
self.CTRL_PLD_CLASS_MAP = {
PayloadClass.PATH: {
PMT.REQUEST: self.path_resolution,
PMT.REPLY: self.handle_path_segment_record,
PMT.REG: self.handle_path_segment_record,
PMT.REVOCATION: self._handle_revocation,
PMT.SYNC: self.handle_path_segment_record,
},
PayloadClass.CERT: {
CertMgmtType.CERT_CHAIN_REQ: self.process_cert_chain_request,
CertMgmtType.CERT_CHAIN_REPLY: self.process_cert_chain_reply,
CertMgmtType.TRC_REPLY: self.process_trc_reply,
CertMgmtType.TRC_REQ: self.process_trc_request,
},
}
self.SCMP_PLD_CLASS_MAP = {
SCMPClass.PATH: {
SCMPPathClass.REVOKED_IF: self._handle_scmp_revocation,
},
}
self._segs_to_zk = deque()
self._revs_to_zk = deque()
self._zkid = ZkID.from_values(self.addr.isd_as, self.id,
[(self.addr.host, self._port)])
self.zk = Zookeeper(self.topology.isd_as, PATH_SERVICE,
self._zkid.copy().pack(), self.topology.zookeepers)
self.zk.retry("Joining party", self.zk.party_setup)
self.path_cache = ZkSharedCache(self.zk, self.ZK_PATH_CACHE_PATH,
self._handle_paths_from_zk)
self.rev_cache = ZkSharedCache(self.zk, self.ZK_REV_CACHE_PATH,
self._rev_entries_handler)
self._init_request_logger()
def worker(self):
"""
Worker thread that takes care of reading shared paths from ZK, and
handling master election for core servers.
"""
worker_cycle = 1.0
start = SCIONTime.get_time()
while self.run_flag.is_set():
sleep_interval(start, worker_cycle, "cPS.worker cycle",
self._quiet_startup())
start = SCIONTime.get_time()
try:
self.zk.wait_connected()
self.path_cache.process()
self.rev_cache.process()
# Try to become a master.
ret = self.zk.get_lock(lock_timeout=0, conn_timeout=0)
if ret: # Either got the lock, or already had it.
if ret == ZK_LOCK_SUCCESS:
logging.info("Became master")
self.path_cache.expire(self.config.propagation_time * 10)
self.rev_cache.expire(self.ZK_REV_OBJ_MAX_AGE)
except ZkNoConnection:
logging.warning('worker(): ZkNoConnection')
self._update_master()
self._propagate_and_sync()
self._handle_pending_requests()
def _update_master(self):
pass
def _rev_entries_handler(self, raw_entries):
for raw in raw_entries:
rev_info = RevocationInfo.from_raw(raw)
self._remove_revoked_segments(rev_info)
def _add_rev_mappings(self, pcb):
"""
Add if revocation token to segment ID mappings.
"""
segment_id = pcb.get_hops_hash()
with self.htroot_if2seglock:
for asm in pcb.iter_asms():
hof = asm.pcbm(0).hof()
egress_h = (asm.p.hashTreeRoot, hof.egress_if)
self.htroot_if2seg.setdefault(egress_h, set()).add(segment_id)
ingress_h = (asm.p.hashTreeRoot, hof.ingress_if)
self.htroot_if2seg.setdefault(ingress_h, set()).add(segment_id)
@abstractmethod
def _handle_up_segment_record(self, pcb, **kwargs):
raise NotImplementedError
@abstractmethod
def _handle_down_segment_record(self, pcb, **kwargs):
raise NotImplementedError
@abstractmethod
def _handle_core_segment_record(self, pcb, **kwargs):
raise NotImplementedError
def _add_segment(self, pcb, seg_db, name, reverse=False):
res = seg_db.update(pcb, reverse=reverse)
if res == DBResult.ENTRY_ADDED:
self._add_rev_mappings(pcb)
logging.info("%s-Segment registered: %s", name, pcb.short_id())
return True
elif res == DBResult.ENTRY_UPDATED:
self._add_rev_mappings(pcb)
logging.debug("%s-Segment updated: %s", name, pcb.short_id())
return False
def _handle_scmp_revocation(self, pld, meta):
rev_info = RevocationInfo.from_raw(pld.info.rev_info)
self._handle_revocation(rev_info, meta)
def _handle_revocation(self, rev_info, meta):
"""
Handles a revocation of a segment, interface or hop.
:param rev_info: The RevocationInfo object.
"""
assert isinstance(rev_info, RevocationInfo)
if not self._validate_revocation(rev_info):
return
if meta.ia[0] != self.addr.isd_as[0]:
logging.info("Dropping revocation received from a different ISD. Src: %s RevInfo: %s" %
(meta, rev_info.short_desc()))
return
if rev_info in self.revocations:
return False
self.revocations.add(rev_info)
logging.debug("Received revocation from %s: %s", meta, rev_info.short_desc())
self._revs_to_zk.append(rev_info.copy().pack()) # have to pack copy
# Remove segments that contain the revoked interface.
self._remove_revoked_segments(rev_info)
# Forward revocation to other path servers.
self._forward_revocation(rev_info, meta)
def _remove_revoked_segments(self, rev_info):
"""
        Try the previous and next hashes as possible tokens,
        and delete any segment that matches.
:param rev_info: The revocation info
:type rev_info: RevocationInfo
"""
if not ConnectedHashTree.verify_epoch(rev_info.p.epoch):
return
(hash01, hash12) = ConnectedHashTree.get_possible_hashes(rev_info)
if_id = rev_info.p.ifID
with self.htroot_if2seglock:
down_segs_removed = 0
core_segs_removed = 0
up_segs_removed = 0
for h in (hash01, hash12):
for sid in self.htroot_if2seg.pop((h, if_id), []):
if self.down_segments.delete(sid) == DBResult.ENTRY_DELETED:
down_segs_removed += 1
if self.core_segments.delete(sid) == DBResult.ENTRY_DELETED:
core_segs_removed += 1
if not self.topology.is_core_as:
if (self.up_segments.delete(sid) ==
DBResult.ENTRY_DELETED):
up_segs_removed += 1
logging.debug("Removed segments revoked by [%s]: UP: %d DOWN: %d CORE: %d" %
(rev_info.short_desc(), up_segs_removed, down_segs_removed,
core_segs_removed))
@abstractmethod
def _forward_revocation(self, rev_info, meta):
"""
Forwards a revocation to other path servers that need to be notified.
:param rev_info: The RevInfo object.
:param meta: The MessageMeta object.
"""
raise NotImplementedError
def _send_path_segments(self, req, meta, logger, up=None, core=None, down=None):
"""
Sends path-segments to requester (depending on Path Server's location).
"""
up = up or set()
core = core or set()
down = down or set()
all_segs = up | core | down
if not all_segs:
logger.warning("No segments to send for request: %s from: %s" %
(req.short_desc(), meta))
return
revs_to_add = self._peer_revs_for_segs(all_segs)
pld = PathRecordsReply.from_values(
{PST.UP: up, PST.CORE: core, PST.DOWN: down},
revs_to_add
)
self.send_meta(pld, meta)
logger.info("Sending PATH_REPLY with %d segment(s).", len(all_segs))
def _peer_revs_for_segs(self, segs):
"""Returns a list of peer revocations for segments in 'segs'."""
def _handle_one_seg(seg):
for asm in seg.iter_asms():
for pcbm in asm.iter_pcbms(1):
hof = pcbm.hof()
for if_id in [hof.ingress_if, hof.egress_if]:
rev_info = self.revocations.get((asm.isd_as(), if_id))
if rev_info:
revs_to_add.add(rev_info.copy())
return
revs_to_add = set()
for seg in segs:
_handle_one_seg(seg)
return list(revs_to_add)
def _handle_pending_requests(self):
rem_keys = []
# Serve pending requests.
with self.pen_req_lock:
for key in self.pending_req:
to_remove = []
for req, meta, logger in self.pending_req[key]:
if self.path_resolution(req, meta, new_request=False, logger=logger):
meta.close()
to_remove.append((req, meta, logger))
# Clean state.
for req_meta in to_remove:
self.pending_req[key].remove(req_meta)
if not self.pending_req[key]:
rem_keys.append(key)
for key in rem_keys:
del self.pending_req[key]
def _handle_paths_from_zk(self, raw_entries):
"""
Handles cached paths through ZK, passed as a list.
"""
for raw in raw_entries:
recs = PathSegmentRecords.from_raw(raw)
for type_, pcb in recs.iter_pcbs():
seg_meta = PathSegMeta(pcb, self.continue_seg_processing,
type_=type_, params={'from_zk': True})
self._process_path_seg(seg_meta)
if raw_entries:
logging.debug("Processed %s segments from ZK", len(raw_entries))
def handle_path_segment_record(self, seg_recs, meta):
"""
Handles paths received from the network.
"""
params = self._dispatch_params(seg_recs, meta)
# Add revocations for peer interfaces included in the path segments.
for rev_info in seg_recs.iter_rev_infos():
self.revocations.add(rev_info)
# Verify pcbs and process them
for type_, pcb in seg_recs.iter_pcbs():
seg_meta = PathSegMeta(pcb, self.continue_seg_processing, meta,
type_, params)
self._process_path_seg(seg_meta)
def continue_seg_processing(self, seg_meta):
"""
        For every path segment (that can be verified) received from the
        network or ZK, this function is called to continue processing the
        segment. The segment is added to the path DB and pending requests
        are checked.
"""
pcb = seg_meta.seg
logging.debug("Successfully verified PCB %s" % pcb.short_id())
type_ = seg_meta.type
params = seg_meta.params
self.handle_ext(pcb)
self._dispatch_segment_record(type_, pcb, **params)
self._handle_pending_requests()
def handle_ext(self, pcb):
"""
Handle beacon extensions.
"""
# Handle PCB extensions:
if pcb.is_sibra():
# TODO(Sezer): Implement sibra extension handling
logging.debug("%s", pcb.sibra_ext)
for asm in pcb.iter_asms():
pol = asm.routing_pol_ext()
if pol:
self.handle_routing_pol_ext(pol)
def handle_routing_pol_ext(self, ext):
# TODO(Sezer): Implement extension handling
logging.debug("Routing policy extension: %s" % ext)
def _dispatch_segment_record(self, type_, seg, **kwargs):
# Check that segment does not contain a revoked interface.
if not self._validate_segment(seg):
return
handle_map = {
PST.UP: self._handle_up_segment_record,
PST.CORE: self._handle_core_segment_record,
PST.DOWN: self._handle_down_segment_record,
}
handle_map[type_](seg, **kwargs)
def _validate_segment(self, seg):
"""
Check segment for revoked upstream/downstream interfaces.
:param seg: The PathSegment object.
:return: False, if the path segment contains a revoked upstream/
downstream interface (not peer). True otherwise.
"""
for asm in seg.iter_asms():
pcbm = asm.pcbm(0)
for if_id in [pcbm.p.inIF, pcbm.p.outIF]:
rev_info = self.revocations.get((asm.isd_as(), if_id))
if rev_info:
logging.debug("Found revoked interface (%d, %s) in segment %s." %
(rev_info.p.ifID, rev_info.isd_as(), seg.short_desc()))
return False
return True
def _dispatch_params(self, pld, meta):
return {}
def _propagate_and_sync(self):
self._share_via_zk()
self._share_revs_via_zk()
def _gen_prop_recs(self, queue, limit=PROP_LIMIT):
count = 0
pcbs = defaultdict(list)
while queue:
count += 1
type_, pcb = queue.popleft()
pcbs[type_].append(pcb.copy())
if count >= limit:
                yield pcbs
count = 0
pcbs = defaultdict(list)
if pcbs:
            yield pcbs
@abstractmethod
def path_resolution(self, path_request, meta, new_request=True, logger=None):
"""
Handles all types of path request.
"""
raise NotImplementedError
def _handle_waiting_targets(self, pcb):
"""
Handle any queries that are waiting for a path to any core AS in an ISD.
"""
dst_ia = pcb.first_ia()
if not self.is_core_as(dst_ia):
logging.warning("Invalid waiting target, not a core AS: %s", dst_ia)
return
self._send_waiting_queries(dst_ia[0], pcb)
def _send_waiting_queries(self, dst_isd, pcb):
targets = self.waiting_targets[dst_isd]
if not targets:
return
path = pcb.get_path(reverse_direction=True)
src_ia = pcb.first_ia()
while targets:
(seg_req, logger) = targets.pop(0)
meta = self._build_meta(ia=src_ia, path=path, host=SVCType.PS_A, reuse=True)
self.send_meta(seg_req, meta)
logger.info("Waiting request (%s) sent to %s via %s",
seg_req.short_desc(), meta, pcb.short_desc())
def _share_via_zk(self):
if not self._segs_to_zk:
return
logging.info("Sharing %d segment(s) via ZK", len(self._segs_to_zk))
for pcb_dict in self._gen_prop_recs(self._segs_to_zk,
limit=self.ZK_SHARE_LIMIT):
seg_recs = PathSegmentRecords.from_values(pcb_dict)
self._zk_write(seg_recs.pack())
def _share_revs_via_zk(self):
if not self._revs_to_zk:
return
logging.info("Sharing %d revocation(s) via ZK", len(self._revs_to_zk))
while self._revs_to_zk:
self._zk_write_rev(self._revs_to_zk.popleft())
def _zk_write(self, data):
hash_ = crypto_hash(data).hex()
try:
self.path_cache.store("%s-%s" % (hash_, SCIONTime.get_time()), data)
except ZkNoConnection:
logging.warning("Unable to store segment(s) in shared path: "
"no connection to ZK")
def _zk_write_rev(self, data):
hash_ = crypto_hash(data).hex()
try:
self.rev_cache.store("%s-%s" % (hash_, SCIONTime.get_time()), data)
except ZkNoConnection:
logging.warning("Unable to store revocation(s) in shared path: "
"no connection to ZK")
def _init_request_logger(self):
"""
Initializes the request logger.
"""
self._request_logger = logging.getLogger("RequestLogger")
# Create new formatter to include the random request id and the request in the log.
        formatter = Rfc3339Formatter(
"%(asctime)s [%(levelname)s] (%(threadName)s) %(message)s "
"{id=%(id)s, req=%(req)s, from=%(from)s}")
add_formatter('RequestLogger', formatter)
def get_request_logger(self, req, meta):
"""
Returns a logger adapter for 'req'.
"""
# Random ID to relate log entries for a request.
req_id = "%08x" % random.randint(0, 2**32 - 1)
# Create a logger for the request to log with context.
return logging.LoggerAdapter(
self._request_logger, {"id": req_id, "req": req.short_desc(), "from": str(meta)})
def run(self):
"""
Run an instance of the Path Server.
"""
threading.Thread(
target=thread_safety_net, args=(self.worker,),
name="PS.worker", daemon=True).start()
threading.Thread(
target=thread_safety_net, args=(self._check_trc_cert_reqs,),
name="Elem.check_trc_cert_reqs", daemon=True).start()
super().run()
|
Rebel Heart gives a fresh perspective to the friends-to-enemies-to-lovers trope, and honestly I am here for every page of it. Told in dual POV, it is the story of AJ and Brock, childhood friends whose relationship turned sour as they grew older and drifted apart. However, they are brought back together when AJ is announced as Brock’s new English tutor for the semester.
Instantly fun and sweet, Farlow weaves an incredibly romantic story of two stars coming back together against all adversity. This story made me giggle and swoon, and I immediately connected to AJ and her refusal to be anyone but her true self; let’s face it, her quick wit and quips were probably the highlight of the entire novel for me. Super quick and romantic, Rebel Heart is a swoon-worthy read that will have you smiling from ear to ear from start to finish.
|
import time
from functools import wraps
def retry(ExceptionToCheck, tries=4, delay=3, backoff=2, logger=None):
    """Retry calling the decorated function using an exponential backoff.

    Copyright (c) 2013, SaltyCrane
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the
distribution.
* Neither the name of the SaltyCrane nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
http://www.saltycrane.com/blog/2009/11/trying-out-retry-decorator-python/
original from: http://wiki.python.org/moin/PythonDecoratorLibrary#Retry
:param ExceptionToCheck: the exception to check. may be a tuple of
exceptions to check
:type ExceptionToCheck: Exception or tuple
:param tries: number of times to try (not retry) before giving up
:type tries: int
:param delay: initial delay between retries in seconds
:type delay: int
:param backoff: backoff multiplier e.g. value of 2 will double the delay
each retry
:type backoff: int
:param logger: logger to use. If None, print
:type logger: logging.Logger instance
"""
def deco_retry(f):
@wraps(f)
def f_retry(*args, **kwargs):
mtries, mdelay = tries, delay
while mtries > 1:
try:
return f(*args, **kwargs)
except ExceptionToCheck as e:
msg = "%s, Retrying in %d seconds..." % (str(e), mdelay)
if logger:
logger.warning(msg)
else:
                        print(msg)
time.sleep(mdelay)
mtries -= 1
mdelay *= backoff
return f(*args, **kwargs)
return f_retry # true decorator
return deco_retry
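
# Example usage (illustrative; the exception type and timings are made up):
#
#     @retry(IOError, tries=4, delay=3, backoff=2)
#     def read_remote_file(url):
#         ...
#
# With tries=4, a persistently failing call sleeps for 3, 6 and 12 seconds
# between the first three attempts; the fourth attempt lets the exception
# propagate to the caller.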
|
The Haenens have lived on Aruba since 2006, where Elie and his wife, Brenda, manage the two vacation homes and four long-term rentals they own. Elie is a true European bon vivant, fully enjoying the Caribbean life together with Brenda and their two children. The Haenens are pleased and proud to rent out their two vacation homes, Villa HopiBon and Villa BibaBon, and no request is too large or too small when it comes to helping you enjoy your stay. Your hosts will greet you at the airport with refreshments, and when you arrive at the villa, the refrigerator will already be filled with food so you can start your vacation right away. The Haenens live on the same street, so they are always readily available for advice and assistance, doing whatever it takes to make their guests’ stay a dream vacation. They start by making sure that each villa is sparkling clean when guests arrive, with all the comforts of home set up and ready to go. They know what it takes to make you feel welcome from the minute you set foot on Aruba, providing old-fashioned hospitality and taking care of your initial food and drink stocks. The Haenen family takes the motto “Bon Bini” to heart and views it as their duty to make your stay one big party! Come visit Aruba and enjoy a wonderful summer vacation while getting to know the fantastically friendly Haenen family. They guarantee that you will have a fabulous time in world-class accommodations.
|
# -*- coding: utf-8; -*-
#
# This file is part of Superdesk.
#
# Copyright 2013, 2014 Sourcefabric z.u. and contributors.
#
# For the full copyright and license information, please see the
# AUTHORS and LICENSE files distributed with this source code, or
# at https://www.sourcefabric.org/superdesk/license
from superdesk.notification import push_notification
from superdesk.resource import Resource
from apps.archive.common import on_create_item
from superdesk.services import BaseService
import superdesk
def init_app(app):
endpoint_name = 'planning'
service = PlanningService(endpoint_name, backend=superdesk.get_backend())
PlanningResource(endpoint_name, app=app, service=service)
class PlanningResource(Resource):
schema = {
'guid': {
'type': 'string',
'unique': True
},
'language': {
'type': 'string'
},
'headline': {
'type': 'string'
},
'slugline': {
'type': 'string'
},
'description_text': {
'type': 'string',
'nullable': True
},
'firstcreated': {
'type': 'datetime'
},
'urgency': {
'type': 'integer'
},
'desk': Resource.rel('desks', True)
}
    item_url = r'regex("[\w,.:-]+")'
datasource = {'search_backend': 'elastic'}
resource_methods = ['GET', 'POST']
privileges = {'POST': 'planning', 'PATCH': 'planning'}
class PlanningService(BaseService):
def on_create(self, docs):
on_create_item(docs)
def on_created(self, docs):
push_notification('planning', created=1)
def on_updated(self, updates, original):
push_notification('planning', updated=1)
def on_deleted(self, doc):
push_notification('planning', deleted=1)
superdesk.privilege(name='planning',
label='Planning Management',
description='User can plan and cover.')
|
SmartUVC is what sets Tru-D apart.
While other disinfection systems may deliver some UVC, they don’t compensate for room variables. Only Tru-D measures, analyzes and delivers the proper dose of UVC energy to consistently and effectively disinfect every room and every surface, even in high-touch, shadowed areas.
Other UVC disinfection methods that rely on a fixed cycle time and multiple positions around the room provide inefficient disinfection and missed areas. Tru-D’s Sensor360® minimizes that risk by calculating the time needed to account for room variables, such as size, geometry, surface reflectivity and the amount and location of equipment in the room. Tru-D effectively delivers a lethal dose during a single cycle from a single, central location in the room.
Other UVC offerings disinfect only line-of-sight surfaces very close to the machine and recommend multiple positions.
Even with the three placements depicted, you can see the majority of the room is not disinfected.
Studies show that Tru-D disinfects an entire room from a single location. This is accomplished by measuring the proper dose of UVC energy that reaches the walls and is reflected back to the center of the room.
Tru-D provides total room disinfection.
|
# Author: Yuriy Ilchenko (ilchenko@physics.utexas.edu)
# Compare two ROC curves from scikit-learn and from TMVA (using skTMVA converter)
import os
import sys
if os.environ.get('TERM') == 'xterm':
os.environ['TERM'] = 'vt100'
# Now it's OK to import readline :)
# Import ROOT libraries
import ROOT
import array
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_curve
from sklearn import tree
import cPickle
import numpy as np
from numpy.random import RandomState
RNG = RandomState(45)
# Construct an example dataset for binary classification
n_vars = 2
n_events = 300
signal = RNG.multivariate_normal(
np.ones(n_vars), np.diag(np.ones(n_vars)), n_events)
background = RNG.multivariate_normal(
np.ones(n_vars) * -1, np.diag(np.ones(n_vars)), n_events)
X = np.concatenate([signal, background])
y = np.ones(X.shape[0])
w = RNG.randint(1, 10, n_events * 2)
y[signal.shape[0]:] *= -1
permute = RNG.permutation(y.shape[0])
X = X[permute]
y = y[permute]
# Some print-out
print "Event numbers total:", 2 * n_events
# Plot the testing points
c1 = ROOT.TCanvas("c1","Testing Dataset",200,10,700,500)
c1.cd()
plot_colors = (ROOT.kRed, ROOT.kBlue)
mg = ROOT.TMultiGraph()
for i, n, c in zip([-1, 1], ('Class A', 'Class B'), plot_colors):
idx = np.where(y == i)
n = len(idx[0])
g = ROOT.TGraph(n,X[idx, 0][0],X[idx, 1][0])
g.SetMarkerColor(c)
g.SetMarkerStyle(8)
g.SetMarkerSize(0.5)
mg.Add(g)
mg.Draw("ap p")
mg.SetTitle("Testing dataset")
mg.GetXaxis().SetTitle("var1")
mg.GetYaxis().SetTitle("var2")
c1.Update()
c1.Modified()
# Use all dataset for testing
X_test, y_test, w_test = X, y, w
# sklearn, get BDT from pickle file
fid = open('bdt_sklearn_to_tmva_example.pkl', 'rb')
bdt = cPickle.load(fid)
# create TMVA reader
reader = ROOT.TMVA.Reader()
var1 = array.array('f',[0.])
reader.AddVariable("var1", var1)
var2 = array.array('f',[0.])
reader.AddVariable("var2", var2)
# TMVA, get BDT from the xml file
reader.BookMVA("BDT", "bdt_sklearn_to_tmva_example.xml")
# List for numpy arrays
sk_y_predicted =[]
tmva_y_predicted =[]
# Number of events
n = X.shape[0]
# Iterate over events
# Note: this is not the fastest way for sklearn
# but most representative, I believe
for i in xrange(n):
if (i % 100 == 0) and (i != 0):
print "Event %i" % i
var1[0] = X.item((i,0))
var2[0] = X.item((i,1))
    # sklearn score (decision_function expects a 2-D array of samples)
    score = bdt.decision_function([[var1[0], var2[0]]]).item(0)
# calculate the value of the classifier with TMVA/TskMVA
bdtOutput = reader.EvaluateMVA("BDT")
    # save sklearn and TMVA BDT output scores
sk_y_predicted.append(score)
tmva_y_predicted.append(bdtOutput)
# Convert arrays to numpy arrays
sk_y_predicted = np.array(sk_y_predicted)
tmva_y_predicted = np.array(tmva_y_predicted)
# Calculate ROC curves
fpr_sk, tpr_sk, _ = roc_curve(y_test, sk_y_predicted)
fpr_tmva, tpr_tmva, _ = roc_curve(y_test, tmva_y_predicted)
# Derive signal efficiencies and background rejections
# for sklearn and TMVA
sig_eff_sk = array.array('f', [rate for rate in tpr_sk])
bkg_rej_sk = array.array('f',[ (1-rate) for rate in fpr_sk])
sig_eff_tmva = array.array('f', [rate for rate in tpr_tmva])
bkg_rej_tmva = array.array('f',[ (1-rate) for rate in fpr_tmva])
# Stack for keeping plots
#plots = []
c2 = ROOT.TCanvas("c2","A Simple Graph Example",200,10,700,500)
c2.cd()
# Draw ROC-curve for sklearn
g1 = ROOT.TGraph(len(sig_eff_sk), sig_eff_sk, bkg_rej_sk)
g1.GetXaxis().SetRangeUser(0.0,1.0)
g1.GetYaxis().SetRangeUser(0.0,1.0)
g1.SetName("g1")
g1.SetTitle("ROC curve")
g1.SetLineStyle(3)
g1.SetLineColor(ROOT.kBlue)
g1.Draw("AL") # draw TGraph with no marker dots
# Draw ROC-curve for skTMVA
g2 = ROOT.TGraph(len(fpr_tmva), sig_eff_tmva, bkg_rej_tmva)
g2.GetXaxis().SetRangeUser(0.0,1.0)
g2.GetYaxis().SetRangeUser(0.0,1.0)
g2.SetName("g2")
g2.SetTitle("ROC curve")
g2.SetLineStyle(7)
g2.SetLineColor(ROOT.kRed)
g2.Draw("SAME") # draw TGraph with no marker dots
leg = ROOT.TLegend(0.4,0.35,0.7,0.2)
#leg.SetHeader("ROC curve")
leg.AddEntry("g1","sklearn","l")
leg.AddEntry("g2","skTMVA","l")
leg.Draw()
c2.Update()
c2.Modified()
## Draw ROC curves
#plt.figure()
#
#plt.plot(fpr_sk, tpr_sk, 'b-', label='scikit-learn bdt.predict()')
#plt.plot(fpr_tmva, tpr_tmva, 'r--', label='TMVA reader.EvaluateMVA("BDT")')
#
#plt.plot([0, 1], [0, 1], 'k--')
#plt.xlim([0.0, 1.0])
#plt.ylim([0.0, 1.05])
#plt.xlabel('False Positive Rate')
#plt.ylabel('True Positive Rate')
#plt.title('Simple ROC-curve comparison')
#
#plt.legend(loc="lower right")
#
#plt.savefig("roc_bdt_curves.png", dpi=96)
|
The picture shows A Lasting Impressions purple colour-changing formula. It starts out as a light green powder, changes to a bright purple when you add water, then begins to lighten when it is time to put the hand or foot in, and finally ends up green, telling you that you can take the hand or foot out of the mould. It is then ready to cast.
It is hypoallergenic, silica-free and dustless.
Alginate is a powder made from seaweed. I mix it with warm water and it turns into a mixture with the consistency of extra-thick cake batter; then, after a minute or two (just enough time to mix it and put the hand or foot into it), it turns into a rubbery mould. Dentists use this product to take impressions of our teeth. When I first started my lifecasting business, the only alginate I could find was from dental distributors. Those alginates were designed for taking impressions of teeth and had flavouring added to them, plus the setting time was way too fast. The other problem was that they were very expensive.
It is very safe, odorless, warm, comfortable, non-toxic and hypoallergenic.
|
# Copyright 2016 Intel Corporation
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""fMRI Simulator example script
Example script to generate a run of a participant's data. This generates
data representing a pair of conditions that are then combined.
Authors: Cameron Ellis (Princeton) 2016
"""
import logging
import numpy as np
from brainiak.utils import fmrisim as sim
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D # noqa: F401
import nibabel
logger = logging.getLogger(__name__)
# Inputs for generate_signal
dimensions = np.array([64, 64, 36]) # What is the size of the brain
feature_size = [9, 4, 9, 9]
feature_type = ['loop', 'cube', 'cavity', 'sphere']
coordinates_A = np.array(
[[32, 32, 18], [26, 32, 18], [32, 26, 18], [32, 32, 12]])
coordinates_B = np.array(
[[32, 32, 18], [38, 32, 18], [32, 38, 18], [32, 32, 24]])
signal_magnitude = [1, 0.5, 0.25, -1] # In percent signal change
# Inputs for generate_stimfunction
onsets_A = [10, 30, 50, 70, 90]
onsets_B = [0, 20, 40, 60, 80]
event_durations = [6]
tr_duration = 2
temporal_res = 1000.0 # How many elements per second are there
duration = 100
# Specify a name to save this generated volume.
savename = 'examples/utils/example.nii'
# Generate a volume representing the location and quality of the signal
volume_signal_A = sim.generate_signal(dimensions=dimensions,
feature_coordinates=coordinates_A,
feature_type=feature_type,
feature_size=feature_size,
signal_magnitude=signal_magnitude,
)
volume_signal_B = sim.generate_signal(dimensions=dimensions,
feature_coordinates=coordinates_B,
feature_type=feature_type,
feature_size=feature_size,
signal_magnitude=signal_magnitude,
)
# Visualize the signal that was generated for condition A
fig = plt.figure()
sim.plot_brain(fig,
volume_signal_A)
plt.show()
# Create the time course for the signal to be generated
stimfunction_A = sim.generate_stimfunction(onsets=onsets_A,
event_durations=event_durations,
total_time=duration,
temporal_resolution=temporal_res,
)
stimfunction_B = sim.generate_stimfunction(onsets=onsets_B,
event_durations=event_durations,
total_time=duration,
temporal_resolution=temporal_res,
)
# Convolve the HRF with the stimulus sequence
signal_function_A = sim.convolve_hrf(stimfunction=stimfunction_A,
tr_duration=tr_duration,
temporal_resolution=temporal_res,
)
signal_function_B = sim.convolve_hrf(stimfunction=stimfunction_B,
tr_duration=tr_duration,
temporal_resolution=temporal_res,
)
# Multiply the HRF timecourse with the signal
signal_A = sim.apply_signal(signal_function=signal_function_A,
volume_signal=volume_signal_A,
)
signal_B = sim.apply_signal(signal_function=signal_function_B,
volume_signal=volume_signal_B,
)
# Combine the signals from the two conditions
signal = signal_A + signal_B
# Combine the stim functions
stimfunction = list(np.add(stimfunction_A, stimfunction_B))
stimfunction_tr = stimfunction[::int(tr_duration * temporal_res)]
# Generate the mask of the signal
mask, template = sim.mask_brain(signal, mask_threshold=0.2)
# Mask the signal to the shape of a brain (attenuates signal according to grey
# matter likelihood)
signal *= mask.reshape(dimensions[0], dimensions[1], dimensions[2], 1)
# Generate original noise dict for comparison later
orig_noise_dict = sim._noise_dict_update({})
# Create the noise volumes (using the default parameters)
noise = sim.generate_noise(dimensions=dimensions,
stimfunction_tr=stimfunction_tr,
tr_duration=tr_duration,
mask=mask,
template=template,
noise_dict=orig_noise_dict,
)
# Standardize the signal activity to make it percent signal change
mean_act = (mask * orig_noise_dict['max_activity']).sum() / (mask > 0).sum()
signal = signal * mean_act / 100
# Combine the signal and the noise
brain = signal + noise
# Display the brain
fig = plt.figure()
for tr_counter in list(range(0, brain.shape[3])):
# Get the axis to be plotted
ax = sim.plot_brain(fig,
brain[:, :, :, tr_counter],
mask=mask,
percentile=99.9)
# Wait for an input
logging.info(tr_counter)
plt.pause(0.5)
# Save the volume
affine_matrix = np.diag([-1, 1, 1, 1]) # LR gets flipped
brain_nifti = nibabel.Nifti1Image(brain, affine_matrix) # Create a nifti brain
nibabel.save(brain_nifti, savename)
# Load in the test dataset and generate a random volume based on it
# Pull out the data and associated data
volume = nibabel.load(savename).get_data()
dimensions = volume.shape[0:3]
total_time = volume.shape[3] * tr_duration
stimfunction = sim.generate_stimfunction(onsets=[],
event_durations=[0],
total_time=total_time,
)
stimfunction_tr = stimfunction[::int(tr_duration * temporal_res)]
# Calculate the mask
mask, template = sim.mask_brain(volume=volume,
mask_self=True,
)
# Calculate the noise parameters
noise_dict = sim.calc_noise(volume=volume,
mask=mask,
)
# Create the noise volumes (using the default parameters)
noise = sim.generate_noise(dimensions=dimensions,
tr_duration=tr_duration,
stimfunction_tr=stimfunction_tr,
template=template,
mask=mask,
noise_dict=noise_dict,
)
# Create a nifti brain
brain_noise = nibabel.Nifti1Image(noise, affine_matrix)
nibabel.save(brain_noise, 'examples/utils/example2.nii') # Save
|
What’s the deal with protein shakes?
Protein powders have been around for over 60 years now and people have been taking various forms of “recovery shakes” for even longer.
Arthur Saxon, the famous strongman from the early 1900s who still holds a world record of 168kg for the bent press, had a ‘health drink’. It consisted of dark lager mixed with a whole egg and plenty of sugar. Possibly not the best workout shake, but it seemed to work for him.
So what is the lowdown on protein shakes? Do you even need them? Most advertising would lead you to believe that without them you’ll be left in everyone’s dust. The truth is, though, protein is protein.
The most important thing for anyone who wants to look better is simple. Have less body-fat and more muscle in the right places.
Building muscle requires protein from one’s diet. Protein also helps you lose fat by keeping you fuller for longer. If you don’t consume enough protein, your body will break down its own protein (your muscle) and use it for energy and recovery.
Basically, eat protein if you are trying to build muscle or lose fat. In fact, it matters especially when you are trying to lose fat, as a higher-protein diet keeps you fuller.
The optimum amount of protein required for building muscle is about 0.8 grams per pound of bodyweight. So if you weigh 200lbs you need around 160 grams of protein a day. Some people recommend even higher amounts, but the research doesn’t support anything above 0.8 grams per pound of bodyweight.
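That rule of thumb is simple enough to sketch in a few lines of Python (the 0.8 g/lb figure is the article's own guideline; the function name is mine):

```python
def daily_protein_target(bodyweight_lb, grams_per_lb=0.8):
    """Daily protein target in grams, from bodyweight in pounds."""
    return bodyweight_lb * grams_per_lb

# A 200 lb person needs roughly 160 g of protein per day.
print(daily_protein_target(200))  # 160.0
```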
Increasing protein intake without doing anything else also leads to a reduction in total calorie consumption and weight loss.
As long as this protein requirement is met, you'll build muscle or maintain it while losing fat, depending on your training and whether you are in a calorie surplus or deficit.
The key is meeting this requirement. Whether you use protein shakes or eat lots of high protein foods like meat, fish, eggs or dairy it doesn’t really matter. The end result will be the same.
Your body doesn't utilise the protein from a protein powder any differently from that of a chicken breast. Eventually, the body will break down all the protein you consume into amino acids (the building blocks of protein).
As long as you eat a complete protein source, one that has all the essential amino acids needed by the body, you'll end up with the same result. All animal and dairy proteins are complete; most plant sources are not.
What does this mean for protein shakes? It simply means you should see a protein shake more like a chicken breast: simply a source of protein. The only real difference between protein shakes and more traditional foods is convenience; it's a lot easier to gulp down a protein shake when you're busy than to cook something.
If preparing and eating enough high-protein foods is something you struggle with, using protein shakes as a way to meet your daily requirement can be a fantastic option.
Protein shakes can also be very versatile and used to boost the protein content of foods like yoghurts and smoothies.
That's all protein shakes are, nothing more, nothing less. That said, if you think a protein shake will help you hit your daily intake, I suggest going for a whey-based formula, as it mixes really well into smoothies and yoghurts and, unlike most vegetable protein powders, is a complete protein.
If you are vegan, I'd get a mixture of rice, pea and hemp protein: although each of these is missing some of the essential amino acids our body needs, added together they form a complete protein source.
So do you take protein shakes and if so have they made a difference?
|
import re
from livestreamer.plugin import Plugin, PluginError
from livestreamer.plugin.api import http
from livestreamer.stream import HLSStream
# The last four channel_paths respond with 301 and provide
# a redirect location that corresponds to a channel_path above.
_url_re = re.compile(r"""
https?://www\.rtve\.es/
(?P<channel_path>
directo/la-1|
directo/la-2|
directo/teledeporte|
directo/canal-24h|
noticias/directo-la-1|
television/la-2-directo|
deportes/directo/teledeporte|
noticias/directo/canal-24h
)
/?
""", re.VERBOSE)
_id_map = {
"directo/la-1": "LA1",
"directo/la-2": "LA2",
"directo/teledeporte": "TDP",
"directo/canal-24h": "24H",
"noticias/directo-la-1": "LA1",
"television/la-2-directo": "LA2",
"deportes/directo/teledeporte": "TDP",
"noticias/directo/canal-24h": "24H",
}
class Rtve(Plugin):
@classmethod
def can_handle_url(cls, url):
return _url_re.match(url)
def __init__(self, url):
Plugin.__init__(self, url)
match = _url_re.match(url).groupdict()
self.channel_path = match["channel_path"]
def _get_streams(self):
stream_id = _id_map[self.channel_path]
hls_url = "http://iphonelive.rtve.es/{0}_LV3_IPH/{0}_LV3_IPH.m3u8".format(stream_id)
# Check if the stream is available
res = http.head(hls_url, raise_for_status=False)
if res.status_code == 404:
raise PluginError("The program is not available due to rights restrictions")
return HLSStream.parse_variant_playlist(self.session, hls_url)
__plugin__ = Rtve
|
Steve Becker always wanted to own his own farm. When the opportunity popped up to buy acreage near the intersection of Highway 22 and Sand Road, he took it.
The 54 acres he bought is now a 3-acre hops farm, with additional acres planted in alfalfa and fruit trees.
The alfalfa is marketable now, but it’s going to take a few years before the hops and fruit trees are producing in quantity. In other words, it’s a long-term investment.
Becker, the youngest of 12 children, grew up on a farm his family rented near Ainsworth and farmed in partnership with his dad for seven years.
He moved to Iowa City in 1989 and has owned and managed several small businesses, including River City Auto Sales in Coralville. He is currently a Realtor with Lepic-Kroeger in Iowa City.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
episode, season, episode_count, season_count and episode_details properties
"""
import copy
from collections import defaultdict
from rebulk import Rebulk, RemoveMatch, Rule, AppendMatch, RenameMatch
from rebulk.match import Match
from rebulk.remodule import re
from rebulk.utils import is_iterable
from .title import TitleFromPosition
from ..common import dash, alt_dash, seps
from ..common.formatters import strip
from ..common.numeral import numeral, parse_numeral
from ..common.validators import compose, seps_surround, seps_before, int_coercable
from ...reutils import build_or_pattern
def episodes():
"""
Builder for rebulk object.
:return: Created Rebulk object
:rtype: Rebulk
"""
# pylint: disable=too-many-branches,too-many-statements,too-many-locals
rebulk = Rebulk()
rebulk.regex_defaults(flags=re.IGNORECASE).string_defaults(ignore_case=True)
rebulk.defaults(private_names=['episodeSeparator', 'seasonSeparator'])
def season_episode_conflict_solver(match, other):
"""
Conflict solver for episode/season patterns
:param match:
:param other:
:return:
"""
if match.name in ['season', 'episode'] and other.name in ['screen_size', 'video_codec',
'audio_codec', 'audio_channels',
'container', 'date']:
return match
elif match.name in ['season', 'episode'] and other.name in ['season', 'episode'] \
and match.initiator != other.initiator:
if 'weak-episode' in match.tags:
return match
if 'weak-episode' in other.tags:
return other
if 'x' in match.initiator.raw.lower():
return match
if 'x' in other.initiator.raw.lower():
return other
return '__default__'
season_episode_seps = []
season_episode_seps.extend(seps)
season_episode_seps.extend(['x', 'X', 'e', 'E'])
season_words = ['season', 'saison', 'serie', 'seasons', 'saisons', 'series']
episode_words = ['episode', 'episodes', 'ep']
of_words = ['of', 'sur']
all_words = ['All']
season_markers = ["S"]
season_ep_markers = ["x"]
episode_markers = ["xE", "Ex", "EP", "E", "x"]
range_separators = ['-', '~', 'to', 'a']
weak_discrete_separators = list(sep for sep in seps if sep not in range_separators)
strong_discrete_separators = ['+', '&', 'and', 'et']
discrete_separators = strong_discrete_separators + weak_discrete_separators
def ordering_validator(match):
"""
Validator for season list. They should be in natural order to be validated.
episode/season separated by a weak discrete separator should be consecutive, unless a strong discrete separator
or a range separator is present in the chain (1.3&5 is valid, but 1.3-5 is not valid and 1.3.5 is not valid)
"""
values = match.children.to_dict(implicit=True)
if 'season' in values and is_iterable(values['season']):
# Season numbers must be in natural order to be validated.
if not list(sorted(values['season'])) == values['season']:
return False
if 'episode' in values and is_iterable(values['episode']):
            # Episode numbers must be in natural order to be validated.
if not list(sorted(values['episode'])) == values['episode']:
return False
def is_consecutive(property_name):
"""
Check if the property season or episode has valid consecutive values.
:param property_name:
:type property_name:
:return:
:rtype:
"""
previous_match = None
valid = True
for current_match in match.children.named(property_name):
if previous_match:
                    separator = match.children.previous(current_match,
                                                        lambda m: m.name == property_name + 'Separator', 0)
if separator.raw not in range_separators and separator.raw in weak_discrete_separators:
if not current_match.value - previous_match.value == 1:
valid = False
if separator.raw in strong_discrete_separators:
valid = True
break
previous_match = current_match
return valid
return is_consecutive('episode') and is_consecutive('season')
# S01E02, 01x02, S01S02S03
rebulk.chain(formatter={'season': int, 'episode': int},
tags=['SxxExx'],
abbreviations=[alt_dash],
children=True,
private_parent=True,
validate_all=True,
validator={'__parent__': ordering_validator},
conflict_solver=season_episode_conflict_solver) \
.regex(build_or_pattern(season_markers) + r'(?P<season>\d+)@?' +
build_or_pattern(episode_markers) + r'@?(?P<episode>\d+)',
validate_all=True,
validator={'__parent__': seps_before}).repeater('+') \
.regex(build_or_pattern(episode_markers + discrete_separators + range_separators,
name='episodeSeparator',
escape=True) +
r'(?P<episode>\d+)').repeater('*') \
.chain() \
.regex(r'(?P<season>\d+)@?' +
build_or_pattern(season_ep_markers) +
r'@?(?P<episode>\d+)',
validate_all=True,
validator={'__parent__': seps_before}) \
.chain() \
.regex(r'(?P<season>\d+)@?' +
build_or_pattern(season_ep_markers) +
r'@?(?P<episode>\d+)',
validate_all=True,
validator={'__parent__': seps_before}) \
.regex(build_or_pattern(season_ep_markers + discrete_separators + range_separators,
name='episodeSeparator',
escape=True) +
r'(?P<episode>\d+)').repeater('*') \
.chain() \
.regex(build_or_pattern(season_markers) + r'(?P<season>\d+)',
validate_all=True,
validator={'__parent__': seps_before}) \
.regex(build_or_pattern(season_markers + discrete_separators + range_separators,
name='seasonSeparator',
escape=True) +
r'(?P<season>\d+)').repeater('*')
# episode_details property
for episode_detail in ('Special', 'Bonus', 'Omake', 'Ova', 'Oav', 'Pilot', 'Unaired'):
rebulk.string(episode_detail, value=episode_detail, name='episode_details')
rebulk.regex(r'Extras?', name='episode_details', value='Extras')
rebulk.defaults(private_names=['episodeSeparator', 'seasonSeparator'],
validate_all=True, validator={'__parent__': seps_surround}, children=True, private_parent=True)
def validate_roman(match):
"""
Validate a roman match if surrounded by separators
:param match:
:type match:
:return:
:rtype:
"""
if int_coercable(match.raw):
return True
return seps_surround(match)
rebulk.chain(abbreviations=[alt_dash],
formatter={'season': parse_numeral, 'count': parse_numeral},
validator={'__parent__': compose(seps_surround, ordering_validator),
'season': validate_roman,
'count': validate_roman}) \
.defaults(validator=None) \
.regex(build_or_pattern(season_words) + '@?(?P<season>' + numeral + ')') \
.regex(r'' + build_or_pattern(of_words) + '@?(?P<count>' + numeral + ')').repeater('?') \
.regex(r'@?(?P<seasonSeparator>' +
build_or_pattern(range_separators + discrete_separators + ['@'], escape=True) +
r')@?(?P<season>\d+)').repeater('*')
rebulk.regex(build_or_pattern(episode_words) + r'-?(?P<episode>\d+)' +
r'(?:v(?P<version>\d+))?' +
r'(?:-?' + build_or_pattern(of_words) + r'-?(?P<count>\d+))?', # Episode 4
abbreviations=[dash], formatter=int,
disabled=lambda context: context.get('type') == 'episode')
rebulk.regex(build_or_pattern(episode_words) + r'-?(?P<episode>' + numeral + ')' +
r'(?:v(?P<version>\d+))?' +
r'(?:-?' + build_or_pattern(of_words) + r'-?(?P<count>\d+))?', # Episode 4
abbreviations=[dash],
validator={'episode': validate_roman},
formatter={'episode': parse_numeral, 'version': int, 'count': int},
disabled=lambda context: context.get('type') != 'episode')
rebulk.regex(r'S?(?P<season>\d+)-?(?:xE|Ex|E|x)-?(?P<other>' + build_or_pattern(all_words) + ')',
tags=['SxxExx'],
abbreviations=[dash],
validator=None,
formatter={'season': int, 'other': lambda match: 'Complete'})
rebulk.defaults(private_names=['episodeSeparator', 'seasonSeparator'], validate_all=True,
validator={'__parent__': seps_surround}, children=True, private_parent=True)
# 12, 13
rebulk.chain(tags=['bonus-conflict', 'weak-movie', 'weak-episode'], formatter={'episode': int, 'version': int}) \
.defaults(validator=None) \
.regex(r'(?P<episode>\d{2})') \
.regex(r'v(?P<version>\d+)').repeater('?') \
.regex(r'(?P<episodeSeparator>[x-])(?P<episode>\d{2})').repeater('*')
# 012, 013
rebulk.chain(tags=['bonus-conflict', 'weak-movie', 'weak-episode'], formatter={'episode': int, 'version': int}) \
.defaults(validator=None) \
.regex(r'0(?P<episode>\d{1,2})') \
.regex(r'v(?P<version>\d+)').repeater('?') \
.regex(r'(?P<episodeSeparator>[x-])0(?P<episode>\d{1,2})').repeater('*')
# 112, 113
rebulk.chain(tags=['bonus-conflict', 'weak-movie', 'weak-episode'], formatter={'episode': int, 'version': int},
disabled=lambda context: not context.get('episode_prefer_number', False)) \
.defaults(validator=None) \
.regex(r'(?P<episode>\d{3,4})') \
.regex(r'v(?P<version>\d+)').repeater('?') \
.regex(r'(?P<episodeSeparator>[x-])(?P<episode>\d{3,4})').repeater('*')
# 1, 2, 3
rebulk.chain(tags=['bonus-conflict', 'weak-movie', 'weak-episode'], formatter={'episode': int, 'version': int},
disabled=lambda context: context.get('type') != 'episode') \
.defaults(validator=None) \
.regex(r'(?P<episode>\d)') \
.regex(r'v(?P<version>\d+)').repeater('?') \
.regex(r'(?P<episodeSeparator>[x-])(?P<episode>\d{1,2})').repeater('*')
# e112, e113
# TODO: Enhance rebulk for validator to be used globally (season_episode_validator)
rebulk.chain(formatter={'episode': int, 'version': int}) \
.defaults(validator=None) \
.regex(r'e(?P<episode>\d{1,4})') \
.regex(r'v(?P<version>\d+)').repeater('?') \
.regex(r'(?P<episodeSeparator>e|x|-)(?P<episode>\d{1,4})').repeater('*')
# ep 112, ep113, ep112, ep113
rebulk.chain(abbreviations=[dash], formatter={'episode': int, 'version': int}) \
.defaults(validator=None) \
.regex(r'ep-?(?P<episode>\d{1,4})') \
.regex(r'v(?P<version>\d+)').repeater('?') \
.regex(r'(?P<episodeSeparator>ep|e|x|-)(?P<episode>\d{1,4})').repeater('*')
# 102, 0102
rebulk.chain(tags=['bonus-conflict', 'weak-movie', 'weak-episode', 'weak-duplicate'],
formatter={'season': int, 'episode': int, 'version': int},
conflict_solver=lambda match, other: match if other.name == 'year' else '__default__',
disabled=lambda context: context.get('episode_prefer_number', False)) \
.defaults(validator=None) \
.regex(r'(?P<season>\d{1,2})(?P<episode>\d{2})') \
.regex(r'v(?P<version>\d+)').repeater('?') \
.regex(r'(?P<episodeSeparator>x|-)(?P<episode>\d{2})').repeater('*')
rebulk.regex(r'v(?P<version>\d+)', children=True, private_parent=True, formatter=int)
rebulk.defaults(private_names=['episodeSeparator', 'seasonSeparator'])
# TODO: List of words
# detached of X count (season/episode)
rebulk.regex(r'(?P<episode>\d+)?-?' + build_or_pattern(of_words) +
r'-?(?P<count>\d+)-?' + build_or_pattern(episode_words) + '?',
abbreviations=[dash], children=True, private_parent=True, formatter=int)
rebulk.regex(r'Minisodes?', name='episode_format', value="Minisode")
    # Hardcoded movie to disable weak season/episodes
rebulk.regex('OSS-?117',
abbreviations=[dash], name="hardcoded-movies", marker=True,
conflict_solver=lambda match, other: None)
rebulk.rules(EpisodeNumberSeparatorRange(range_separators),
SeasonSeparatorRange(range_separators), RemoveWeakIfMovie, RemoveWeakIfSxxExx,
RemoveWeakDuplicate, EpisodeDetailValidator, RemoveDetachedEpisodeNumber, VersionValidator,
CountValidator, EpisodeSingleDigitValidator)
return rebulk
class CountValidator(Rule):
"""
Validate count property and rename it
"""
priority = 64
consequence = [RemoveMatch, RenameMatch('episode_count'), RenameMatch('season_count')]
properties = {'episode_count': [None], 'season_count': [None]}
def when(self, matches, context):
to_remove = []
episode_count = []
season_count = []
for count in matches.named('count'):
previous = matches.previous(count, lambda match: match.name in ['episode', 'season'], 0)
if previous:
if previous.name == 'episode':
episode_count.append(count)
elif previous.name == 'season':
season_count.append(count)
else:
to_remove.append(count)
return to_remove, episode_count, season_count
class AbstractSeparatorRange(Rule):
"""
Remove separator matches and create matches for season range.
"""
priority = 128
consequence = [RemoveMatch, AppendMatch]
def __init__(self, range_separators, property_name):
super(AbstractSeparatorRange, self).__init__()
self.range_separators = range_separators
self.property_name = property_name
def when(self, matches, context):
to_remove = []
to_append = []
for separator in matches.named(self.property_name + 'Separator'):
previous_match = matches.previous(separator, lambda match: match.name == self.property_name, 0)
next_match = matches.next(separator, lambda match: match.name == self.property_name, 0)
if previous_match and next_match and separator.value in self.range_separators:
for episode_number in range(previous_match.value + 1, next_match.value):
match = copy.copy(next_match)
match.value = episode_number
to_append.append(match)
to_remove.append(separator)
previous_match = None
for next_match in matches.named(self.property_name):
if previous_match:
separator = matches.input_string[previous_match.initiator.end:next_match.initiator.start]
if separator not in self.range_separators:
separator = strip(separator)
if separator in self.range_separators:
for episode_number in range(previous_match.value + 1, next_match.value):
match = copy.copy(next_match)
match.value = episode_number
to_append.append(match)
to_append.append(Match(previous_match.end, next_match.start - 1,
name=self.property_name + 'Separator',
private=True,
input_string=matches.input_string))
to_remove.append(next_match) # Remove and append match to support proper ordering
to_append.append(next_match)
previous_match = next_match
return to_remove, to_append
class EpisodeNumberSeparatorRange(AbstractSeparatorRange):
"""
    Remove separator matches and create matches for episode number range.
"""
priority = 128
consequence = [RemoveMatch, AppendMatch]
def __init__(self, range_separators):
super(EpisodeNumberSeparatorRange, self).__init__(range_separators, "episode")
class SeasonSeparatorRange(AbstractSeparatorRange):
"""
Remove separator matches and create matches for season range.
"""
priority = 128
consequence = [RemoveMatch, AppendMatch]
def __init__(self, range_separators):
super(SeasonSeparatorRange, self).__init__(range_separators, "season")
class RemoveWeakIfMovie(Rule):
"""
Remove weak-movie tagged matches if it seems to be a movie.
"""
priority = 64
consequence = RemoveMatch
def when(self, matches, context):
if matches.named('year') or matches.markers.named('hardcoded-movies'):
return matches.tagged('weak-movie')
class RemoveWeakIfSxxExx(Rule):
"""
Remove weak-movie tagged matches if SxxExx pattern is matched.
"""
priority = 64
consequence = RemoveMatch
def when(self, matches, context):
if matches.tagged('SxxExx', lambda match: not match.private):
return matches.tagged('weak-movie')
class RemoveWeakDuplicate(Rule):
"""
    Remove weak-duplicate tagged matches if duplicate patterns are found, for example The 100.109
"""
priority = 64
consequence = RemoveMatch
def when(self, matches, context):
to_remove = []
for filepart in matches.markers.named('path'):
patterns = defaultdict(list)
for match in reversed(matches.range(filepart.start, filepart.end,
predicate=lambda match: 'weak-duplicate' in match.tags)):
if match.pattern in patterns[match.name]:
to_remove.append(match)
else:
patterns[match.name].append(match.pattern)
return to_remove
class EpisodeDetailValidator(Rule):
"""
Validate episode_details if they are detached or next to season or episode.
"""
priority = 64
consequence = RemoveMatch
def when(self, matches, context):
ret = []
for detail in matches.named('episode_details'):
if not seps_surround(detail) \
and not matches.previous(detail, lambda match: match.name in ['season', 'episode']) \
and not matches.next(detail, lambda match: match.name in ['season', 'episode']):
ret.append(detail)
return ret
class RemoveDetachedEpisodeNumber(Rule):
"""
    If multiple episodes are found, remove those that are not detached from a range and are less than 10.
Fairy Tail 2 - 16-20, 2 should be removed.
"""
priority = 64
consequence = RemoveMatch
dependency = [RemoveWeakIfSxxExx, RemoveWeakDuplicate]
def when(self, matches, context):
ret = []
episode_numbers = []
episode_values = set()
for match in matches.named('episode', lambda match: not match.private and 'weak-movie' in match.tags):
if match.value not in episode_values:
episode_numbers.append(match)
episode_values.add(match.value)
episode_numbers = list(sorted(episode_numbers, key=lambda match: match.value))
if len(episode_numbers) > 1 and \
episode_numbers[0].value < 10 and \
episode_numbers[1].value - episode_numbers[0].value != 1:
parent = episode_numbers[0]
while parent: # TODO: Add a feature in rebulk to avoid this ...
ret.append(parent)
parent = parent.parent
return ret
class VersionValidator(Rule):
"""
Validate version if previous match is episode or if surrounded by separators.
"""
priority = 64
dependency = [RemoveWeakIfMovie, RemoveWeakIfSxxExx]
consequence = RemoveMatch
def when(self, matches, context):
ret = []
for version in matches.named('version'):
episode_number = matches.previous(version, lambda match: match.name == 'episode', 0)
if not episode_number and not seps_surround(version.initiator):
ret.append(version)
return ret
class EpisodeSingleDigitValidator(Rule):
"""
Remove single digit episode when inside a group that doesn't own title.
"""
dependency = [TitleFromPosition]
consequence = RemoveMatch
def when(self, matches, context):
ret = []
for episode in matches.named('episode', lambda match: len(match.initiator) == 1):
group = matches.markers.at_match(episode, lambda marker: marker.name == 'group', index=0)
if group:
if not matches.range(*group.span, predicate=lambda match: match.name == 'title'):
ret.append(episode)
return ret
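As a rough standalone illustration of the kind of token the SxxExx chain above is built to catch, here is a deliberately simplified regex (my own sketch, not the module's actual pattern, which also handles ranges, separators and repeaters):

```python
import re

# Simplified stand-in for the SxxExx chain: a season marker "S"
# followed by an episode marker ("E" or "x"), case-insensitive.
SXXEXX = re.compile(r'S(?P<season>\d+)[Ex](?P<episode>\d+)', re.IGNORECASE)

m = SXXEXX.search("Show.Name.S01E02.720p.mkv")
print(int(m.group("season")), int(m.group("episode")))  # 1 2
```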
|
For children who believe their distress is invisible, self-harm is a way of creating physical proof of their emotional pain and of maintaining hope that it will be noticed and addressed. For others, the physical pain from self-harm can be a way to distract themselves from the emotional pain they feel. Many of these children may not have learned healthy ways of identifying and expressing overwhelming negative emotions, such as intense anger, tension, or frustration; they find temporary relief from these distressing feelings through harming themselves.
Other mental health difficulties often occur alongside SIB, and a successful approach to freeing a student from it needs to address these difficulties as well, improve coping skills and practices, and resolve the underlying distress.
When students feel that the school environment adds to their feelings of distress and isolation or are embarrassed about their self-injurious behaviour, they may avoid attending school.
Provide appreciative support for children when they are not engaged in self-injurious behaviour to build trust and help them believe that they can anticipate and rely on support.
Let children know that they are by no means alone.
Provide a lot of support and encouragement as children develop new coping strategies.
When homework or school work is clearly overwhelming, make temporary adjustments.
Recognize that any one teacher or support staff member cannot meet the needs of a student with SIB but consistently respond with empathy and validate the student’s need for care and connection.
Approach the principal with your awareness of the student’s needs.
Students who self-harm are more sensitive to criticism and require praise and support.
Share the task with other staff of giving the time and support needed to break down the student’s feelings of isolation.
Build partnerships and follow protocol with community agencies that can assist in supporting students.
Support and facilitate student engagement in vital areas of school life such as drama, sports, clubs, etc.
Encourage the student to seek medical attention if wounds are infected.
Help facilitate connections to community mental health supports.
Seek information to help build an understanding of self-injurious behaviour in general.
Consider how to provide some extra care and attention for the child.
Consider how the family models the management of distressing emotions and work to build effective communication and acceptance.
Support referrals to child mental health services and work closely with the treatment team.
Provide consultation and support to schools and families about how to practise effective coping skills with the child so that he/she can function successfully at school and in the community.
Assess the need for and provide treatment of underlying distress.
Facilitate referrals for specialized consultations and treatment as needed.
Help the family assess its ability to model effective communication and support for one another.
Help the child express and identify feelings and at the same time develop healthy coping skills.
|
#!/usr/bin/env python
import web
import json
urls = (
'/checkIngredients', 'checkIngredients',
'/markOut/(.*)', 'markOut',
'/markIn/(.*)', 'markIn'
)
jsonString = '''[
{
"type": "liquor",
"id" : "brandy",
"name": "Brandy",
"inStock": true
},
{
"type": "liquor",
"id" : "bourbon",
"name": "Bourbon",
"inStock": true
},
{
"type": "liquor",
"id" : "dark-rum",
"name": "Dark Rum",
"inStock": false
},
{
"type": "liquor",
"id" : "gin",
"name": "Gin",
"inStock": false
},
{
"type": "liquor",
"id" : "light-rum",
"name": "Light Rum",
"inStock": false
},
{
"type": "liquor",
"id" : "tequila",
"name": "Tequila",
"inStock": true
},
{
"type": "liquor",
"id" : "vodka",
"name": "Vodka",
"inStock": true
},
{
"type": "liquor",
"id" : "whiskey",
"name": "Whiskey",
"inStock": true
},
{
"type": "mixer",
"id" : "amaretto",
"name": "Amaretto",
"inStock": true
},
{
"type": "mixer",
"id" : "bitters",
"name": "Bitters",
"inStock": true
},
{
"type": "mixer",
"id" : "blue",
"name": "Blue Curacao",
"inStock": true
},
{
"type": "mixer",
"id" : "champagne",
"name": "Champagne",
"inStock": true
},
{
"type": "mixer",
"id" : "club-soda",
"name": "Club Soda",
"inStock": true
},
{
"type": "mixer",
"id" : "creme-de-cacao",
"name": "Creme de Cacao",
"inStock": true
},
{
"type": "mixer",
"id" : "creme-de-menth",
"name": "Creme de Menthe",
"inStock": true
},
{
"type": "mixer",
"id" : "dry-vermouth",
"name": "Dry Vermouth",
"inStock": true
},
{
"type": "mixer",
"id" : "grenadine",
"name": "Grenadine",
"inStock": true
},
{
"type": "mixer",
"id" : "ginger-ale",
"name": "Ginger Ale",
"inStock": true
},
{
"type": "mixer",
"id" : "ginger-beer",
"name": "Ginger Beer",
"inStock": true
},
{
"type": "mixer",
"id" : "irish-cream",
"name": "Irish Cream",
"inStock": true
},
{
"type": "mixer",
"id" : "apple-juice",
"name": "Juice - Apple",
"inStock": true
},
{
"type": "mixer",
"id" : "cranberry-juice",
"name": "Juice - Cranberry",
"inStock": true
},
{
"type": "mixer",
"id" : "grapefruit-juice",
"name": "Juice - Grapefruit",
"inStock": true
},
{
"type": "mixer",
"id" : "lemon-juice",
"name": "Juice - Lemon",
"inStock": true
},
{
"type": "mixer",
"id" : "lime-juice",
"name": "Juice - Lime",
"inStock": true
},
{
"type": "mixer",
"id" : "mango-juice",
"name": "Juice - Mango",
"inStock": true
},
{
"type": "mixer",
"id" : "orange-juice",
"name": "Juice - Orange",
"inStock": true
},
{
"type": "mixer",
"id" : "peach-juice",
"name": "Juice - Peach",
"inStock": true
},
{
"type": "mixer",
"id" : "pineapple-juice",
"name": "Juice - Pineapple",
"inStock": true
},
{
"type": "mixer",
"id" : "kahlua",
"name": "Kahlua",
"inStock": true
},
{
"type": "mixer",
"id" : "melon-liqueur",
"name": "Melon Liqueur",
"inStock": true
},
{
"type": "mixer",
"id" : "orange-schnapps",
"name": "Orange Schnapps",
"inStock": true
},
{
"type": "mixer",
"id" : "peach-schnapps",
"name": "Peach Schnapps",
"inStock": true
},
{
"type": "mixer",
"id" : "simple-syrup",
"name": "Simple Syrup",
"inStock": true
},
{
"type": "mixer",
"id" : "cola",
"name": "Soda - Cola",
"inStock": true
},
{
"type": "mixer",
"id" : "sprite",
"name": "Soda - Sprite",
"inStock": true
},
{
"type": "mixer",
"id" : "sour-mix",
"name": "Sour Mix",
"inStock": true
},
{
"type": "mixer",
"id" : "so-co",
"name": "Southern Comfort",
"inStock": true
},
{
"type": "mixer",
"id" : "sweet-lime",
"name": "Sweet Lime",
"inStock": true
},
{
"type": "mixer",
"id" : "sweet-sour-mix",
"name": "Sweet & Sour Mix",
"inStock": true
},
{
"type": "mixer",
"id" : "sweet-vermouth",
"name": "Sweet Vermouth",
"inStock": true
},
{
"type": "mixer",
"id" : "triple-sec",
"name": "Triple Sec",
"inStock": true
},
{
"type": "mixer",
"id" : "wine",
"name": "Wine",
"inStock": true
},
{
"type": "other",
"id" : "cinnamon",
"name": "Cinnamon",
"inStock": true
},
{
"type": "other",
"id" : "cream",
"name": "Cream",
"inStock": true
},
{
"type": "other",
"id" : "lime",
"name": "Lime",
"inStock": true
},
{
"type": "other",
"id" : "lemon",
"name": "Lemon",
"inStock": true
},
{
"type": "other",
"id" : "cherry",
"name": "Maraschino Cherry",
"inStock": true
},
{
"type": "other",
"id" : "milk",
"name": "Milk",
"inStock": true
},
{
"type": "other",
"id" : "mint",
"name": "Mint",
"inStock": true
},
{
"type": "other",
"id" : "strawberry",
"name": "Strawberry",
"inStock": true
},
{
"type": "other",
"id" : "sugar",
"name": "Sugar",
"inStock": true
}
]'''
ingredients = json.loads(jsonString)
class checkIngredients:
def GET(self):
web.header('Access-Control-Allow-Origin', '*')
web.header('Access-Control-Allow-Credentials', 'true')
return json.dumps(ingredients)
class markOut:
def GET(self, drink):
web.header('Access-Control-Allow-Origin', '*')
web.header('Access-Control-Allow-Credentials', 'true')
for ing in ingredients:
if drink == ing['id']:
ing['inStock'] = False
return 'Marked ' + str(drink) + ' as out of stock'
return 'Couldn\'t find ' + str(drink)
class markIn:
def GET(self, drink):
web.header('Access-Control-Allow-Origin', '*')
web.header('Access-Control-Allow-Credentials', 'true')
for ing in ingredients:
if drink == ing['id']:
ing['inStock'] = True
return 'Marked ' + str(drink) + ' as in stock'
return 'Couldn\'t find ' + str(drink)
class MyApplication(web.application):
def run(self, port=8080, *middleware):
func = self.wsgifunc(*middleware)
return web.httpserver.runsimple(func, ('0.0.0.0', port))
if __name__ == "__main__":
app = MyApplication(urls, globals())
app.run(port=8888)
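A small client-side sketch of how the /checkIngredients payload above can be consumed; the `in_stock` helper and the trimmed payload are my own illustration, but the JSON shape matches what the server returns:

```python
import json

def in_stock(ingredients, kind=None):
    """Return names of in-stock ingredients, optionally filtered by type."""
    return [ing["name"] for ing in ingredients
            if ing["inStock"] and (kind is None or ing["type"] == kind)]

# Trimmed example of the server's JSON payload
payload = '''[
    {"type": "liquor", "id": "gin", "name": "Gin", "inStock": false},
    {"type": "liquor", "id": "vodka", "name": "Vodka", "inStock": true},
    {"type": "mixer", "id": "lime-juice", "name": "Juice - Lime", "inStock": true}
]'''

ingredients = json.loads(payload)
print(in_stock(ingredients))            # ['Vodka', 'Juice - Lime']
print(in_stock(ingredients, "liquor"))  # ['Vodka']
```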
|
Life is not always a cup of tea, but a break to enjoy that "cup of tea". Art and note card by Steve Henderson.
One of the most satisfying things I do is teaching another person how to knit. And every time I do so, I conclude the lesson with this encouragement: "You've just learned. While knitting is fairly simple, consisting of basically two stitches, until you practice and do it over and over and over, you will not get good.
"And in the process of practicing, and learning how to be good, you will find that you forget some things, or that your knitting looks uneven, or that you drop stitches and you don't know how to get back. And you will get frustrated.
"Not only is this normal, but this is good, and if you're not getting frustrated, then you're probably not pushing yourself beyond your existing skill. You are not stupid. You are not unusual. You are not weird. You are normal. You are above normal when you accept the challenge, fight it, and win. Now, go and knit."
This same advice applies in anything you do, including and especially including creating a fine art painting or sculpture. You won't get better if you don't practice those oil painting techniques and push yourself with new painting instruction; and if you do it a lot and push yourself outside your existing painting skills, you can expect to get frustrated because you are getting somewhere.
Thank you, Carolyn, minister of art wisdom.
I recently attended a plein air event in which I did a highly focused, boring, technical painting over a three-hour quick draw. Halfway through the event, two guys plopped their easels near me and had a blast painting away. One guy partied, drinking beer afterwards; maybe the other guy did too. I, being seriously dehydrated and sore, drank a quart or so of water.
One guy won an award and his painting was purchased; it blew mine out of the water. The other guy did not win that event, but won best of show for the entire event.
It was a bit frustrating creating something I would personally never purchase, nor did I like the painting despite the many compliments. OK, I’m technically skilled – so what?! The work is creatively challenged and simply an accurate depiction of the subject matter. The work said nothing creatively nor aesthetically. I am very grateful for those two guys plopping their easels down and creating masterworks in half the time. They showed me that there is a better way to approach my art, and I will certainly explore a new avenue from this point forward.
I believe it is good to practice a tough self criticism, seeing what others do can be an inspiration and a source of energy to improve our work.
Anyway, we need to find a balance so we can enjoy what we do while looking for new challenges. That balance is not easy at all: being “frustrated” constantly will indeed push us to improve, but it will definitely keep us from enjoying what we do.
|
import gtk
class SettingsWidget(gtk.Notebook):
def __init__(self, app):
gtk.Notebook.__init__(self)
self.app = app
self.set_tab_pos(gtk.POS_LEFT)
self.append_page(self._generic_settings(), gtk.Label("Generic"))
self.append_page(self._completion_settings(), gtk.Label("Completion"))
self.show_all()
def _generic_settings(self):
def set(section, name, value):
self.app.settings.set(section, name, str(value))
self.app.save_settings()
def settings_button(section, name, description):
button = gtk.CheckButton(description)
button.set_active(self.app.settings.getboolean(section, name))
button.connect("toggled",
lambda w: set(section, name, w.get_active()))
vbox.pack_start(button, False, False)
vbox = gtk.VBox()
settings_button("main", "save-before-build", "Save project before build")
settings_button("main", "ptp-debug", "PTP debugging")
return vbox
def _completion_settings(self):
def set(section, name, value):
self.app.settings.set(section, name, str(value))
self.app.save_settings()
def add_check_button(section, name, description):
button = gtk.CheckButton(description)
button.set_active(self.app.settings.getboolean(section,name))
button.connect("toggled",
lambda w: set(section, name, button.get_active()))
vbox.pack_start(button, False, False)
def add_spin_box(section,name,description, numeric = False, digits = 1, range = (1, 12)):
hbox = gtk.HBox()
spin = gtk.SpinButton()
spin.set_digits(digits)
spin.set_increments(1,2)
spin.set_range(range[0], range[1])
spin.set_numeric(numeric)
spin.set_value(self.app.settings.getfloat(section, name))  # use the passed-in section, not a hardcoded one
spin.connect("value-changed", lambda w: set(section, name, str(spin.get_value())))
hbox.pack_start(gtk.Label(description), False, False)
hbox.pack_start(spin, False, False)
vbox.pack_start(hbox, False, False)
vbox = gtk.VBox()
add_check_button("code_completion","enable_highlight_current_line","Highlight current line")
add_check_button("code_completion", "enable_show_line_numbers", "Show line numbers")
add_spin_box("code_completion", "tab_width", "Tab size", numeric = True)
add_check_button("code_completion","enable_info_box","Show info box")
add_spin_box("code_completion", "delay_info_box", "Delay for info box in ms",
numeric = True,
digits = 0,
range = (0, 3000))
return vbox
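The widget above assumes the surrounding application exposes a `settings` object with ConfigParser-style accessors plus a `save_settings()` method. A minimal sketch of that contract — shown in Python 3, with illustrative section and option names rather than the real application's — looks like:

```python
# A minimal sketch of the settings object the widget assumes. The section
# names ("main", "code_completion") mirror the ones used above; the
# AppSettings class itself is illustrative, not the real application's API.
from configparser import ConfigParser

class AppSettings:
    def __init__(self):
        self.settings = ConfigParser()
        self.settings.add_section("main")
        self.settings.set("main", "save-before-build", "True")
        self.settings.add_section("code_completion")
        self.settings.set("code_completion", "tab_width", "4")

    def save_settings(self):
        # The real app would persist to disk here; omitted in this sketch.
        pass

app = AppSettings()
# This mirrors what the widget's toggle callback does with set():
app.settings.set("main", "save-before-build", str(False))
app.save_settings()
print(app.settings.getboolean("main", "save-before-build"))   # False
print(app.settings.getfloat("code_completion", "tab_width"))  # 4.0
```

Storing everything as strings and converting on read (`getboolean`, `getfloat`) is exactly why the widget's callbacks wrap values in `str(...)` before saving.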
|
Passenger cars transport people from place to place. Passenger cars are also known as coach, carriage, or bogie cars. Passenger cars can also be specialized for a specific purpose: sleeping cars, baggage cars, dining cars, and railway post office cars are different types of specialty passenger cars. Transporting people was one of the first uses of railways. Passenger trains could carry more people at one time than the other overland travel option, the horse-drawn carriage.
Passenger cars have evolved greatly over the years. The first passenger cars required passengers to stand for the entire journey. When the passenger cars began appearing in the United States they often resembled stagecoaches. Most early innovations in passenger car design occurred in the United Kingdom. By the 1840’s, the first sleeper cars were traveling the railways of England. Later in the 1840’s, the British Royal mail had created traveling post offices. These cars helped in the efficient delivery of mail from place to place. A good example of the American traveling post office is on display at Cody Park in North Platte Nebraska.
Dining cars became popular as train journeys grew longer and railways expanded across large continents. Around that same time, the appearance of the vestibule between cars allowed passengers to travel safely and easily between passenger cars while the train was in motion. The observation car has always been popular with passengers, as its large windows provide a landscape view of the surroundings. It also operates as the last car on the train, with windows on the rear which provide additional viewing.
Nebraska’s largest train and toy soldier store, Trains and Toy Soldiers, has grown out of our lifelong love of model railroading. Please call us at 1-800-786-1888 if we can help you find the electric model train or toy soldier you are looking for.
|
# -*- encoding: utf-8 -*-
"""
Usage::
hammer auth [OPTIONS] SUBCOMMAND [ARG] ...
Parameters::
SUBCOMMAND subcommand
[ARG] ... subcommand arguments
Subcommands::
login Set credentials
logout Wipe your credentials
status Information about current connections
"""
from robottelo.cli.base import Base
class Auth(Base):
""" Authenticates Foreman users """
command_base = 'auth'
@classmethod
def login(cls, options=None):
"""Set credentials"""
cls.command_sub = 'login'
return cls.execute(
cls._construct_command(options), output_format='csv')
@classmethod
def logout(cls, options=None):
"""Wipe credentials"""
cls.command_sub = 'logout'
return cls.execute(
cls._construct_command(options), output_format='csv')
@classmethod
def status(cls, options=None):
"""Show login status"""
cls.command_sub = 'status'
return cls.execute(
cls._construct_command(options), output_format='csv')
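All three subcommands above follow the same pattern: set `command_sub`, build the CLI string from the base command and options, then execute. The following stand-in — an illustrative sketch, not robottelo's actual `Base` implementation — shows how such a command string is assembled:

```python
# A minimal, hypothetical stand-in for the Base pattern used by Auth:
# command_base + command_sub + flattened --key="value" options.
class MiniBase:
    command_base = ''
    command_sub = ''

    @classmethod
    def _construct_command(cls, options=None):
        parts = ['hammer', cls.command_base, cls.command_sub]
        for key, value in (options or {}).items():
            parts.append('--{0}="{1}"'.format(key, value))
        return ' '.join(parts)

class MiniAuth(MiniBase):
    command_base = 'auth'

    @classmethod
    def login(cls, options=None):
        cls.command_sub = 'login'
        return cls._construct_command(options)

print(MiniAuth.login({'username': 'admin'}))
# hammer auth login --username="admin"
```

In the real class, `execute()` runs this string remotely and parses the CSV output; here we only sketch the string construction.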
|
We kindly ask all partner institutions to confirm to us by email (iro@pwsz.krosno.pl) their ERASMUS+ nominations/selection of outgoing students. Please, send an email with the data included in the attached NOMINATIONS Excel sheet by mid-May, 2019.
Please note that by the DEADLINE of mid-May, 2019 we do not need the students’ Application forms, just the EXCEL DATA SHEET duly completed.
They should also attach a copy of their passport or identity card (the page with the photo only).
Application forms and learning agreements must be filled in legibly and must have all necessary signatures and stamps. Incomplete and/or careless applications will not be processed.
Please bear in mind that you can change your subjects when you are in Krosno.
When the documents are correct and the nominated student has been accepted to stay in Krosno, the Letter of Acceptance and the Learning Agreement will be sent to student’s Home Institution.
At the beginning of July students will start to receive to their e-mail addresses information concerning transport schedule, housing and other.
PLEASE MAKE SURE THAT STUDENTS CHECK THEIR E-MAIL ON A REGULAR BASIS AND THAT THEIR ADDRESSES ARE NOT FROM HOTMAIL.COM, SINCE WE EXPERIENCE PROBLEMS WITH THAT DOMAIN.
All documents will be mailed to Erasmus+ coordinators of Home Institutions.
Please bring the original documents with you to Krosno.
You will find the instructions on how to proceed with the applications, also in Spanish, in one of the attachments.
|
#! /usr/bin/env python
#
# Copyright (c) 2008-2010 University of Utah and the Flux Group.
#
# {{{GENIPUBLIC-LICENSE
#
# GENI Public License
#
# Permission is hereby granted, free of charge, to any person obtaining
# a copy of this software and/or hardware specification (the "Work") to
# deal in the Work without restriction, including without limitation the
# rights to use, copy, modify, merge, publish, distribute, sublicense,
# and/or sell copies of the Work, and to permit persons to whom the Work
# is furnished to do so, subject to the following conditions:
#
# The above copyright notice and this permission notice shall be
# included in all copies or substantial portions of the Work.
#
# THE WORK IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
# OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
# NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
# HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
# WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE WORK OR THE USE OR OTHER DEALINGS
# IN THE WORK.
#
# }}}
#
#
#
import sys
import pwd
import getopt
import os
import re
import xmlrpclib
import urllib
from xml.sax.handler import ContentHandler
import xml.sax
import string
from M2Crypto import X509
ACCEPTSLICENAME=1
def Usage():
print "usage: " + sys.argv[ 0 ] + " [option...] \
[component-manager-1 component-manager-2]"
print """Options:
-c file, --credentials=file read self-credentials from file
[default: query from SA]
-d, --debug be verbose about XML methods invoked
-f file, --certificate=file read SSL certificate from file
[default: ~/.ssl/encrypted.pem]
-h, --help show options and usage
-n name, --slicename=name specify human-readable name of slice
[default: mytestslice]
-p file, --passphrase=file read passphrase from file
[default: ~/.ssl/password]
-r file, --read-commands=file specify additional configuration file
-s file, --slicecredentials=file read slice credentials from file
[default: query from SA]"""
execfile( "test-common.py" )
if len( args ) == 2:
managers = ( args[ 0 ], args[ 1 ] )
elif len( args ):
Usage()
sys.exit( 1 )
else:
managers = None
class findElement(ContentHandler):
name = None
value = None
string = None
attributes = None
data = None
def __init__(self, name, stuff):
self.name = name
xml.sax.parseString(stuff, self)
pass
def startElement(self, name, attrs):
if self.name == name:
self.data = []
self.attributes = attrs
elif self.data != None:
self.data.append("<" + name + ">")
pass
pass
def characters(self, content):
if self.data != None:
self.data.append(content)
pass
pass
def endElement(self, name):
if self.name == name:
self.value = string.join(self.data, "");
self.string = "<" + name + ">" + self.value + "</" + name + ">"
self.data = None;
elif self.data != None:
self.data.append("</" + name + ">")
pass
pass
pass
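The `findElement` helper above can be exercised on its own. The following self-contained sketch — a Python 3 rewrite of the same SAX pattern, using a small hypothetical ticket document — shows how the text of the first matching element is captured:

```python
# Self-contained Python 3 sketch of the findElement idea: a SAX handler
# that records the character data of the first element with a given name.
import xml.sax
from xml.sax.handler import ContentHandler

class FindElement(ContentHandler):
    def __init__(self, name, document):
        ContentHandler.__init__(self)
        self.name = name
        self.value = None
        self.data = None          # non-None while inside the target element
        xml.sax.parseString(document, self)

    def startElement(self, name, attrs):
        if name == self.name:
            self.data = []        # start collecting character data

    def characters(self, content):
        if self.data is not None:
            self.data.append(content)

    def endElement(self, name):
        if name == self.name and self.data is not None:
            self.value = ''.join(self.data)
            self.data = None

# Hypothetical sample document, not a real ProtoGENI ticket.
doc = b"<ticket><node>geni1</node></ticket>"
print(FindElement("node", doc).value)  # geni1
```

The script below uses exactly this mechanism to pull the `<node>` rspec fragments out of each ticket so they can be reused in the tunnel request.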
#
# Get a credential for myself, that allows me to do things at the SA.
#
mycredential = get_self_credential()
print "Got my SA credential"
#
# Lookup slice.
#
params = {}
params["credential"] = mycredential
params["type"] = "Slice"
params["hrn"] = SLICENAME
rval,response = do_method("sa", "Resolve", params)
if rval:
#
# Create a slice.
#
print "Creating new slice called " + SLICENAME
params = {}
params["credential"] = mycredential
params["type"] = "Slice"
params["hrn"] = SLICENAME
rval,response = do_method("sa", "Register", params)
if rval:
Fatal("Could not create new slice")
pass
myslice = response["value"]
print "New slice created"
pass
else:
#
# Get the slice credential.
#
print "Asking for slice credential for " + SLICENAME
myslice = response["value"]
myslice = get_slice_credential( myslice, mycredential )
print "Got the slice credential"
pass
#
# Ask the clearinghouse for a list of component managers.
#
params = {}
params["credential"] = mycredential
rval,response = do_method("ch", "ListComponents", params)
if rval:
Fatal("Could not get a list of components from the ClearingHouse")
pass
components = response["value"];
if managers:
def FindCM( name, cmlist ):
for cm in cmlist:
hrn = cm[ "hrn" ]
if hrn == name or hrn == name + ".cm":
return cm[ "url" ]
Fatal( "Could not find component manager " + name )
url1 = FindCM( managers[ 0 ], components )
url2 = FindCM( managers[ 1 ], components )
else:
url1 = components[0]["url"]
url2 = components[1]["url"]
#url1 = "https://boss.emulab.net/protogeni/stoller/xmlrpc/cm"
#url2 = "https://myboss.myelab.testbed.emulab.net/protogeni/xmlrpc/cm"
#
# Get a ticket for a node on a CM.
#
rspec1 = "<rspec xmlns=\"http://protogeni.net/resources/rspec/0.1\"> " +\
" <node virtual_id=\"geni1\" "+\
" virtualization_type=\"emulab-vnode\"> " +\
" </node>" +\
"</rspec>"
print "Asking for a ticket from CM1 ..."
params = {}
params["credential"] = myslice
params["rspec"] = rspec1
rval,response = do_method(None, "GetTicket", params, URI=url1)
if rval:
if response and response["value"]:
print >> sys.stderr, ""
print >> sys.stderr, str(response["value"])
print >> sys.stderr, ""
pass
Fatal("Could not get ticket")
pass
ticket1 = response["value"]
print "Got a ticket from CM1, asking for a ticket from CM2 ..."
#
# Get the uuid of the node assigned so we can specify it in the tunnel.
#
ticket_element = findElement("ticket", ticket1)
node_element = findElement("node", str(ticket_element.string))
node1_rspec = str(node_element.string);
#
# Get a ticket for a node on another CM.
#
rspec2 = "<rspec xmlns=\"http://protogeni.net/resources/rspec/0.1\"> " +\
" <node virtual_id=\"geni2\" "+\
" virtualization_type=\"emulab-vnode\"> " +\
" </node>" +\
"</rspec>"
params = {}
params["credential"] = myslice
params["rspec"] = rspec2
rval,response = do_method(None, "GetTicket", params, URI=url2)
if rval:
if response and response["value"]:
print >> sys.stderr, ""
print >> sys.stderr, str(response["value"])
print >> sys.stderr, ""
pass
Fatal("Could not get ticket")
pass
ticket2 = response["value"]
print "Got a ticket from CM2, redeeming ticket on CM1 ..."
#
# Get the uuid of the node assigned so we can specify it in the tunnel.
#
ticket_element = findElement("ticket", ticket2)
node_element = findElement("node", str(ticket_element.string))
node2_rspec = str(node_element.string);
#
# Create the slivers.
#
params = {}
params["credential"] = myslice
params["ticket"] = ticket1
rval,response = do_method(None, "RedeemTicket", params, url1)
if rval:
Fatal("Could not redeem ticket on CM1")
pass
sliver1,manifest1 = response["value"]
print "Created a sliver on CM1, redeeming ticket on CM2 ..."
print str(manifest1);
params = {}
params["credential"] = myslice
params["ticket"] = ticket2
rval,response = do_method(None, "RedeemTicket", params, url2)
if rval:
Fatal("Could not redeem ticket on CM2")
pass
sliver2,manifest2 = response["value"]
print "Created a sliver on CM2"
print str(manifest2)
#
# Now add the tunnel part since we have the uuids for the two nodes.
#
rspec = "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" +\
"<rspec xmlns=\"http://www.protogeni.net/resources/rspec/0.1\" " +\
" type=\"request\"> " + node1_rspec + " " + node2_rspec + " " +\
" <link virtual_id=\"link0\" link_type=\"tunnel\"> " +\
" <interface_ref virtual_node_id=\"geni1\" " +\
" virtual_interface_id=\"virt0\" "+\
" tunnel_ip=\"192.168.1.1\" />" +\
" <interface_ref virtual_node_id=\"geni2\" " +\
" virtual_interface_id=\"virt0\" "+\
" tunnel_ip=\"192.168.1.2\" />" +\
" </link> " +\
"</rspec>"
#print str(rspec)
print "Updating ticket on CM1 with tunnel stuff ..."
params = {}
params["credential"] = myslice
params["ticket"] = ticket1
params["rspec"] = rspec
rval,response = do_method(None, "UpdateTicket", params, url1)
if rval:
Fatal("Could not update ticket on CM1")
pass
ticket1 = response["value"]
print "Updated ticket on CM1. Updating ticket on CM2 with tunnel stuff ..."
#
# And again for the second ticket.
#
params = {}
params["credential"] = myslice
params["ticket"] = ticket2
params["rspec"] = rspec
rval,response = do_method(None, "UpdateTicket", params, url2)
if rval:
Fatal("Could not update ticket on CM2")
pass
ticket2 = response["value"]
print "Updated ticket on CM2. Updating sliver on CM1 with new ticket ..."
#
# Update the slivers with the new tickets, to create the tunnels
#
params = {}
params["credential"] = sliver1
params["ticket"] = ticket1
rval,response = do_method(None, "UpdateSliver", params, url1)
if rval:
Fatal("Could not update sliver on CM1")
pass
manifest1 = response["value"]
print "Updated sliver on CM1. Updating sliver on CM2 with new ticket ..."
#print str(manifest1);
params = {}
params["credential"] = sliver2
params["ticket"] = ticket2
rval,response = do_method(None, "UpdateSliver", params, url2)
if rval:
Fatal("Could not update sliver on CM2")
pass
manifest2 = response["value"]
print "Updated sliver on CM2. Starting sliver on CM1 ..."
#print str(manifest1);
#
# Start the slivers.
#
params = {}
params["credential"] = sliver1
rval,response = do_method(None, "StartSliver", params, url1)
if rval:
Fatal("Could not start sliver on CM1")
pass
print "Started sliver on CM1. Starting sliver on CM2 ..."
params = {}
params["credential"] = sliver2
rval,response = do_method(None, "StartSliver", params, url2)
if rval:
Fatal("Could not start sliver on CM2")
pass
print "Slivers have been started, waiting for input to delete it"
print "You should be able to log into the sliver after a little bit"
sys.stdin.readline();
#
# Delete the slivers.
#
print "Deleting sliver1 now"
params = {}
params["credential"] = sliver1
rval,response = do_method(None, "DeleteSliver", params, url1)
if rval:
Fatal("Could not delete sliver on CM1")
pass
print "Sliver1 has been deleted"
print "Deleting sliver2 now"
params = {}
params["credential"] = sliver2
rval,response = do_method(None, "DeleteSliver", params, url2)
if rval:
Fatal("Could not delete sliver on CM2")
pass
print "Sliver2 has been deleted"
|
Watch video glimpses from Autumn Om and Jadan Ashram, School and Hospital and more on youtube.com or omashram.com.
The Bermuda grass lawn had a lot of love this month as Haripuri from Germany removed large sections of weeds from the east side and cut the lawn with the machine several times... Continue reading about monsoon influence, the sustainable garden and much more on omashram.com.
The Holy Vedas coming to Slovakia – Moments from Paramhans Swami Maheshwarananda's 2012 Slovakian Tour.
The first Varazdin Yoga EXPO took place on Saturday, October 13th, organized by "Yoga in Daily Life Varaždin" under the patronage of the Town of Varazdin. H.H. Vishwaguru Mahamandaleshwar Paramhans Swami Maheshwaranandaji blessed the participants and the event with his letter of support, and the vice mayor of Varazdin, Ms Natalija Martincevic, opened the Yoga EXPO with best wishes to the event and its organizers.
On October 2, 2012, more than 140 people gathered around the peace tree in Bratislava, Slovakia to commemorate the International Day of Non-Violence.
Today is Gandhi Jayanti, the birthday of Mahatma Gandhiji that has been declared by the United Nations as International Day of Non-Violence .
|
# pip install XlsxWriter
# pip install requests beautifulsoup4
import requests
import bs4
import xlsxwriter
import sys
def progress_bar(percent, bar_length=30):
hashes = '#' * int(round(percent * bar_length))
spaces = '-' * (bar_length - len(hashes))
sys.stdout.write("\rPercent: [{0}] {1}%".format(hashes + spaces, int(round(percent * 100))))
sys.stdout.flush()
name_workbook = 'cities_mexico_1.xlsx'
workbook = xlsxwriter.Workbook(name_workbook)
worksheet = workbook.add_worksheet('states')
url = 'http://micodigopostal.org'
response = requests.get(url)
soup = bs4.BeautifulSoup(response.text, "html.parser")
table = soup.find('table')
states = []
states_link =[]
tds = table.find_all('td')
for td in tds:
href = td.find('a')
span = td.find('span')
states_link.append(href['href'])
states.append(span.text)
length_states = len(states)
worksheet.write_column('A1', states)
for i in range(length_states):
worksheet = workbook.add_worksheet(states[i])
url_second = url + states_link[i]
response = requests.get(url_second)
soup = bs4.BeautifulSoup(response.text, "html.parser")
table = soup.find('table')
cities = []
cities_link = []
tds = table.find_all('td')
for td in tds:
href = td.find('a')
span = td.find('span')
if span is not None and href is not None:
cities_link.append(href['href'])
cities.append(span.text)
length_cities = len(cities)
worksheet.write_column('A1', cities)
percent = (i+1) / length_states
progress_bar(percent)
workbook.close()
print('finished')
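Since `progress_bar` above redraws in place with a carriage return, the bar text itself is easiest to reason about as a pure function of the percentage. This small sketch extracts that computation so it can be checked directly:

```python
# The bar text computed by progress_bar, factored out as a pure function
# (same rounding rules as above, minus the "\r" redraw).
def bar_string(percent, bar_length=30):
    hashes = '#' * int(round(percent * bar_length))
    spaces = '-' * (bar_length - len(hashes))
    return "Percent: [{0}] {1}%".format(hashes + spaces, int(round(percent * 100)))

print(bar_string(0.5))
# Percent: [###############---------------] 50%
```

Separating the string computation from the `sys.stdout.write` side effect is a handy pattern whenever you want to unit-test terminal output.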
|
Knowing the history of a company can change your overall outlook on not just the company itself, but also the people behind it. Here, we will be exploring the history of one UK company in particular; Vodafone.
To start with, the name Vodafone comes from “voice data fone”, and was chosen by the company to “reflect the provision of voice and data services over mobile phones.” The evolution of Vodafone began in 1982 with the establishment of the ‘Racal Strategic Radio Ltd’ subsidiary of Racal Electronics plc, which was the UK’s biggest maker of military radio technology. They, in turn, formed a joint venture with Millicom called ‘Racal’, which eventually evolved into the present-day Vodafone.
After some terms and agreements were settled and finalized, Vodafone was officially launched on January 1st, 1985 under the new name of Racal-Vodafone (Holdings) Ltd. The company’s first office was located in the Courtyard in Newbury, Berkshire. After that, Racal Strategic Radio was renamed Racal Telecommunications Group Limited. On December 29th, 1986, Racal Electronics issued shares totalling GB£110 million to the minority shareholders of Vodafone, and Vodafone officially became a fully owned brand of Racal. In October of 1988, Racal Telecom, majority-held by Racal Electronics, went public on the London Stock Exchange, with 20% of the company’s stock floated. This flotation, in turn, led to a position where Racal’s stake in Racal Telecom was valued higher than all of Racal Electronics. Under pressure from the stock market to realise the full value for shareholders, Racal split from Racal Telecom in 1991.
In July of 1996, Vodafone acquired the remaining 2/3 of Talkland it didn’t already own for £30.6 million. In November of the same year, Vodafone purchased People’s Phone (a 181 store chain whose customers were already using Vodafone) for £77 million. And additionally acquired the remaining 80% of Astec Communications (a service provider with 21 stores) that it didn’t previously own.
On June 29th, 1999, Vodafone purchased, AirTouch Communications, Inc. changing its name to Vodafone Airtouch plc. In order to gain antitrust approval for the merger, Vodafone sold their 17.2% stake in E-Plus Mobilfunk. Which gave Vodafone a 35% share of Mannesmann (owner of the largest German mobile network). In September 1999, Vodafone settled to merge its U.S. wireless assets with Bell Atlantic Corp to form Verizon Wireless and the merger was completed in April 2000. In July 2000, the company reverted to its former name, Vodafone Group plc.
In 2001, the Company acquired Eircell (Ireland’s largest wireless communications company) from Eircom and was renamed, Vodafone Ireland. Vodafone moved to acquire Japan’s third-largest mobile operator J-Phone. December 2001 led Vodafone to introduce the concept of “Partner Networks”, via signing TDC Mobil (Denmark). Partner Networks allowed for the introduction of Vodafone international services to a local area market, without the required investment of Vodafone. This was utilized to extend Vodafone’s brand and services into markets where it did not previously have stakes in local operators. Their services could be marketed under the dual-brand scheme, where the Vodafone brand is added at the end of the local brand.
In 2007, Vodafone joined a title sponsorship deal with the McLaren Formula One team, deemed “Vodafone McLaren Mercedes” until the sponsorship ended in the 2013 season. In May of 2011, Vodafone Group Plc purchased remaining shares of Vodafone Essar from Essar Group Ltd for $5 billion. And in December, it acquired the Reading-based Bluefish Communications Ltd (an ICT consultancy company). This, in turn, formulated the core of a new Unified Communications and Collaboration practice within its subsidiary, Vodafone Global Enterprise, and focused on implementing key strategies and solutions in cloud computing, while also strengthening its professional services offering.
In April 2012, Vodafone announced it sought to acquire Cable & Wireless Worldwide (CWW) for £1.04 billion. This would give Vodafone access to CWW’s fibre network for businesses, permitting it to deliver unified communications solutions to larger enterprises in the UK and worldwide. In June of 2012, Cable & Wireless’ shareholders voted in favour of Vodafone’s offer, which exceeded the 75% of shares needed for the deal to proceed. In June the following year, Vodafone purchased the German cable company Kabel Deutschland, valued at €7.7 billion, in a bid recommended over that of rival company Liberty Global. In September, Vodafone announced it was selling its 45% stake in Verizon Wireless to Verizon Communications for $130 billion, making it one of the biggest deals in corporate history.
To this day, they remain the world’s leading mobile communications providers, having operations in 26 countries and partnerships with networks in more than 55 countries. On a global level, they boast nearly 444 million customers and 19.5 million in the UK. They even made the world’s first ever mobile call on January 1st, 1985 from London to their Newbury headquarters, where they are still located.The company additionally employs over 13,000 people across the UK, of which many are Vodafone customer service and sales representatives. The free number can be found on their official website. With some incredibly smart investments and innovative ideas, Vodafone truly has an astounding history.
|
totalflight = 2503
kmmax = 0
winner = None
reindeers = list()
class Reindeer:
def __init__(self, name, speed, duration, pause):
self.name = name
self.speed = speed
self.duration = duration
self.pause = pause
self.points = 0
def distanceAt(self, flight):
iteration = self.duration + self.pause
km = flight // iteration * self.duration * self.speed
rest = flight % iteration
km += min(rest, self.duration) * self.speed
return km
def __str__(self):
return self.name + " " + str(self.points)
with open('input.txt') as f:
for line in f.readlines():
s = line.split(' ')
r = Reindeer(s[0], int(s[3]), int(s[6]), int(s[13]))
reindeers.append(r)
km = r.distanceAt(totalflight)
if(kmmax <= km):
winner = r
kmmax = km
print(winner.name, kmmax, "\n")
for i in range(1, totalflight +1):
roundpoints = dict()
maxpoint = 0
for r in reindeers:
points = r.distanceAt(i)
roundpoints[r] = points
maxpoint = max(maxpoint, points)
for r, point in roundpoints.items():
if point == maxpoint:
r.points += 1
reindeers.sort(key=lambda r: r.points, reverse=True)
print(reindeers[0].name, reindeers[0].points)
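The closed-form `distanceAt` above can be sanity-checked against the well-known Advent of Code 2015 day 14 example: Comet flies 14 km/s for 10 s then rests 127 s, Dancer flies 16 km/s for 11 s then rests 162 s, and after 1000 s they have flown 1120 km and 1056 km respectively.

```python
# Standalone version of distanceAt, checked against the puzzle's worked example.
def distance_at(speed, duration, pause, flight):
    cycle = duration + pause
    km = flight // cycle * duration * speed       # distance from full fly/rest cycles
    km += min(flight % cycle, duration) * speed   # partial cycle: fly at most `duration` s
    return km

print(distance_at(14, 10, 127, 1000))  # 1120 (Comet)
print(distance_at(16, 11, 162, 1000))  # 1056 (Dancer)
```

The trick is that only `min(remainder, duration)` seconds of the final, partial cycle are spent flying — the rest, if any, is rest time.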
|
Jody W. is delighted. Jody is a retired, 80-year-young Redwood City resident who loves her fitness and water-color classes. But getting to them was a challenge.
Jody is not alone. Transportation is on many people’s minds. That’s why people are delighted to find out that 70 Strong can assist them in trying out Lyft, the ride-share service.
And after one free ride to a doctor’s appointment, the cost is only $4 per ride for people who live in the Sequoia Healthcare District, regardless of the distance they travel. That is, people who live in Atherton, Belmont, Redwood City, Redwood Shores, San Carlos, Woodside and Portola Valley and parts of Menlo Park, Foster City and San Mateo (ZIP code 94404).
It’s proving to be popular. People are taking 500 rides a month.
A former tech worker, Jody takes classes several days a week and meets with friends at the Veterans Memorial Senior Center in Redwood City. It was there that she met Theodora Kyle-Singer, 70 Strong’s Lead Community Navigator.
Kyle-Singer was glad to assist Jody with Lyft; Jody had heard about the service but hadn’t been able to try it out. With Kyle-Singer’s helping hand, Jody is now a regular rider.
“It is so convenient and it’s so nice,” she says. Plus, she notes, it’s great for her friends who don’t have cell phones.
First, everyone who uses it must pre-register for the service.
70 Strong Navigators introduce people to the folks at the Menlo Park-based Little House transportation hotline who run it.
A Lyft driver can take people to the Veterans Memorial Senior Center in Redwood City or to and from Little House and doctor’s appointments, dentist and other medical services from San Mateo to Menlo Park.
The cost is a flat rate of $4 a trip for people who live in the Sequoia Healthcare District.
For people who live outside the District, standard Lyft rates apply.
70 Strong Navigators can arrange one free ride to a 70 Strong activity.
70 Strong Navigators will gladly teach people how to download the Lyft app and use it on their phones whenever they’d like.
To learn about these and other Transportation options, go to the 70 Strong directory. Or call a 70 Strong Navigator at 650.780.7547.
|
#!/usr/bin/env python
#
# $Id: sadm_getch.py 9424 2011-06-13 18:42:04Z ahartvigsen $
#
# Proprietary and confidential.
# Copyright $Date:: 2011#$ Perfect Search Corporation.
# All rights reserved.
#
def getch():
"""Gets a single character from stdin. Does not echo to the screen."""
return _impl()
class _getchUnix:
def __init__(self):
import tty, sys, termios # import termios now or else you'll get the Unix version on the Mac
def __call__(self):
import sys, tty, termios
fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)
try:
tty.setraw(sys.stdin.fileno())
ch = sys.stdin.read(1)
finally:
termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)
return ch
class _getchWindows:
def __init__(self):
import msvcrt
def __call__(self):
import msvcrt
return msvcrt.getch()
class _getchMacCarbon:
"""
A function which returns the current ASCII key that is down;
if no ASCII key is down, the null string is returned. The
page http://www.mactech.com/macintosh-c/chap02-1.html was
very helpful in figuring out how to do this.
"""
def __init__(self):
# Depending on which version of python we have, and which
# version of OSX, this implementation may or may not be
# available. The Unix impl appears to work on the mac,
# in my testing, so we can fall back to that one if need be.
import Carbon
Carbon.Evt #see if it has this (in Unix, it doesn't)
def __call__(self):
import Carbon
if Carbon.Evt.EventAvail(0x0008)[0]==0: # 0x0008 is the keyDownMask
return ''
else:
#
# The event contains the following info:
# (what,msg,when,where,mod)=Carbon.Evt.GetNextEvent(0x0008)[1]
#
# The message (msg) contains the ASCII char which is
# extracted with the 0x000000FF charCodeMask; this
# number is converted to an ASCII character with chr() and
# returned
#
(what,msg,when,where,mod)=Carbon.Evt.GetNextEvent(0x0008)[1]
return chr(msg & 0x000000FF)
try:
_impl = _getchWindows()
except ImportError:
_impl = None
try:
_impl = _getchMacCarbon()
except AttributeError:
pass
except ImportError:
pass
if not _impl:
_impl = _getchUnix()
if __name__ == '__main__': # a little test
print 'Press a key'
while True:
k=getch()
if k != '':
break
print 'you pressed ',str(ord(k))
|
Lady Gaga and Irina Shayk both wore black ensembles and sat alongside Bradley Cooper at the 91st Academy Awards — but that wasn’t the only thing they had in common. The Academy Award winner and the model both got camera-ready by using a slew of Marc Jacobs Beauty products. They even used the same mascara, which is being touted as masterful for lasting through Gaga’s touching acceptance speech.
|
from jelly import *
# NOTE: This still has indentation issues
from wikis import quicks_wiki, atoms_wiki, quicks_tail
#quicks_tail to be used to format quicks like so:
# Ternary if
# <condition>
# ...
# <if-clause>
# ...
# <else-clause>
# ...
for k in quicks:
quicks[k].token = k
for k in hypers:
hypers[k].token = k
for k in atoms:
atoms[k].token = k
def is_litlist(literal):
return re.match(str_litlist+"$",literal)!=None and re.match(str_literal+"$",literal)==None
def literal_type(literal):
out = ""
lit = literal
if lit[0] == "“":
if '“' in lit[1:]:
out = ["list"," of %ss "]
else:
out = ["%s"]
if lit[-1] == '”':
out[-1] = out[-1] % "string"
elif lit[-1] == '»':
out[-1] = out[-1] % 'dictionary-compressed string'
elif lit[-1] == '‘':
out[-1] = out[-1] % 'code-page index list'
elif lit[-1] == '’':
out[-1] = out[-1] % 'literal'
elif lit[0] == '⁽':
out = ["integer"]
elif lit[0] == '⁾':
out = ["2-char string"]
elif lit[0] == '”':
out = ["char"]
else:
out = ["integer"]
return out
def literal_title(literal):
if is_litlist(literal):
equiv = "[" + ','.join(map(mono_literal_equivalent,literal.split(","))) + "]"
name = list(map(literal_type, literal.split(",")))  # materialize: a map iterator would be consumed below
first = name[0]
if all(item == first for item in name):
first.insert(1, "s")
name = "list of " + ''.join(first)
else:
name = "list"
else:
equiv = mono_literal_equivalent(literal)
name = ''.join(literal_type(literal))
return "The literal "+name+" "+equiv
def mono_literal_equivalent(mono_literal):
evaled = jelly_eval(mono_literal,[])
if type(evaled) == list:
if type(evaled[0]) == list:
if type(evaled[0][0]) == str:
evaled = [''.join(k) for k in evaled]
elif type(evaled[0]) == str:
evaled = ''.join(evaled)
if type(evaled) == str:
evaled = "'" + evaled + "'"
return str(evaled)
def chainsep_title(token):
assert token in chain_separators.keys()
value = chain_separators[token]
return "Start a new "+['nil','mon','dy'][value[0]]+"adic chain"+(" with reversed arguments" if not value[1] else "")
def name(token):
if len(token) == 0:
return ""
elif token in atoms_wiki:
return atoms_wiki[token]
elif token in quicks_wiki:
return quicks_wiki[token]
elif token in str_arities:
return chainsep_title(token)
else:
return literal_title(token)
def token_attrdict(ls):
assert type(ls) in [str,list,attrdict]
if type(ls) == str:
if ls in quicks:
return quicks[ls]
elif ls in atoms:
return atoms[ls]
elif ls in hypers:
return hypers[ls]
else:
return create_literal(regex_liter.sub(parse_literal, ls))
elif type(ls) == list:
return [token_attrdict(k) for k in ls]
elif type(ls) == attrdict:
return token_attrdict(ls.token)
def indent_deepest(ls):
if type(ls) == list:
return [indent_deepest(k) for k in ls]
ls.indentation += 2
return ls
# structure derived from Jelly's parse_code() function.
regex_token_sep = re.compile(str_nonlits + "|" + str_litlist + "|[" + str_arities +"]|")
def parse_code_named(code):
lines_match = regex_flink.finditer(code)
lines = list(lines_match)
lines_str = [line.group(0) for line in lines]
lines_match = regex_flink.finditer(code)
links = [[] for line in lines]
for index, line_match in enumerate(lines_match):
line = line_match.group(0)
chains = links[index]
for word_match in regex_chain.finditer(line):
word = word_match.group(0)
chain = []
for match in regex_token_sep.finditer(word):
token = match.group(0)
token_span = attrdict(token=token, span=match.span(), word_start=word_match.start(), line_len = len(line), name=name(token), indentation=0)
if not len(token):
break
if token in atoms:
chain.append(token_span)
elif token in quicks:
popped = []
while not quicks[token].condition([token_attrdict(k) for k in popped]) and (chain or chains):
popped.insert(0, chain.pop() if chain else chains.pop())
popped = indent_deepest(popped)
#token_span = indent_deepest(token_span)
chain.append([popped, token_span])
elif token in hypers:
chain.append(token_span)
else:
chain.append(token_span)
chains.append(chain)
return (links, lines_str)
def order(tokens):
if type(tokens) in (list,tuple):
# system to order more naturally e.g. ABC? -> ?CAB [if C then A else B].
# Improve based on quick? Future difficulty: "/" could have two definitions
if len(tokens)==0:
return []
if len(tokens)==1:
return [order(tokens[~0])]
if len(tokens)==2:
return [order(tokens[~0]),order(tokens[~1])]
if len(tokens)==3:
return [order(tokens[~0]),order(tokens[~2]),order(tokens[~1])]
elif type(tokens) == attrdict:
return tokens
else:
return tokens
def order_ranking(ranking):
out = []
for link in ranking:
p = []
for chain in link:
o = []
for token_seq in chain:
ordered = order(token_seq)
if type(ordered) == attrdict:
o.append(ordered)
else:
for k in order(token_seq):
o.append(k)
p.append(o)
out.append(p)
return out
def explain_token(token):
assert type(token) in [str, list, tuple, attrdict]
if type(token) == str:
return [token, name(token)]
elif type(token) == attrdict:
return token
elif type(token) in [list, tuple]:
o = []
for tok in token:
e = explain_token(tok)
o+=[e]
return o
def filter_out(ls, element):
if type(ls) == list:
return [filter_out(k, element) for k in ls if k!=element]
else:
return ls
def form_neat(ranking):
if type(ranking) == attrdict:
return "name: "+ranking.name
else:
return [form_neat(k) for k in ranking]
def explain(code):
ranking, lines = parse_code_named(code)
print("RANKING: ",ranking)
ranking = filter_out(ranking, [])
ranking = order_ranking(ranking)
print("RANKING: ",ranking)
explanation = []
# Iteration form not pythonic but necessary to append lines from parse_code_named. Maybe interleave instead?
for line_num in range(len(ranking)):
line = ranking[line_num]
explanation.append(lines[line_num])
for chain in line:
explanation.append(explain_token(chain))
return render(explanation)
def render(ordered,join="\n\n"):
assert type(ordered) in [str, list, attrdict]
if type(ordered) == list:
# this looks and is horrible. TODO:Change
lines = ["\n".join( [a for a in render(k,"\n").split("\n")] ) for k in ordered]
o = join.join(lines)
return re.sub(r"(\n\n[^\n]+\n)\n",r"\1",o)
elif type(ordered) == str:
return ordered
elif type(ordered) == attrdict:
start = ordered.span[0]+ordered.word_start
return " "*(start)+ordered.token+" "*(ordered.line_len-start-len(ordered.token))+" "*ordered.indentation+" "+ordered.name
test_string = """3,µ+5µ7C
01P?2S?+3
5Ç+5©
P01?
3
1+2+3+4+5
1+2µ3+45
CN$
+5/
+/
SƤ
S€"""
print(explain(test_string))
k = attrdict(a=5, b=3, c=1)
|
Sin Sin Fine Art opened the group exhibition ASSEMBLING on July 22, showcasing the works of four unique artists all currently based in Shenzhen. They come from various countries, ranging from South Africa to the UK, the US and China, and yet they have found themselves working in the same city, linked through sometimes serendipitous connections.
In organizing the exhibit, we opted for a theme that would match the “young and fresh” artists themselves. We hope to encourage them with this, their first Hong Kong exhibit, and look forward to seeing them continue to grow and develop in the future.
Please click [More] to recollect the memories on the opening night.
This summer, Sin Sin Fine Art proudly presents a refreshing group exhibition, “Assembling”, bringing together four artists with different cultural backgrounds who are all based in Shenzhen, China.
The content of Assembling includes ceramic, glass, installation, multimedia, painting and dance, all assembled to complement and connect with one another. Each artist has their own perspective while sharing the same thread of chance that brought them together.
Click [More] to read the recent exhibition review in "that's" SHENZHEN, reported by Bailey Hu.
We live today not in a luxury world but in a cultural world, where we have to create experiences. Starting from 2016, JOYCE will expand its vision of art and culture and open an exciting new era through collaboration with top leading cultural partners. The experience will be made possible through a close partnership with Sin Sin Fine Art.
To bring a maison ambience to the new JOYCE Central, there will be a dedicated corner for Sin Sin Fine Art to create the universe of Sin Sin Man, telling the story of art in day-to-day life. Surprise and excitement in the new space will add value for both our readers and customers. Stay tuned!
Sin Sin Fine Art proudly presents “Cardinals”, the first solo exhibition in the Asia Pacific of renowned Cuban artist Carlos García de la Nuez, opening on 23 September.
A member of "4 x 4", one of the key visual artist groups of the period, Carlos García de la Nuez emerged from the rising generation of the 1980s that stood for a separation from the traditional schools of Cuban painting. García de la Nuez's aesthetic celebrates Cuba's national art, based on the language of gestural expression with influences of abstract art. The artist uses spots, textures, and signs, among other subtle techniques, as lyrics to amplify elemental themes of tension and equilibrium with indubitable artistic vigour.
*As one of the "2016 Reach For The HeART" fundraising programs, partial proceeds of "CARDINALS" will go to the charity fund.
Sin Sin Fine Art is pleased to present “ELEVATED” by Japanese Artist Honoka Kawazoe, the emerald discoverer.
Her concept is to create the bond between people and stones by using raw emerald in her creations.
The connection comes alive by making jewellery according to the character and nature of each person and individual gemstone.
She has been exhibiting worldwide since 2003, and now she is ready for her new journey to Hong Kong.
*As one of the "2016 Reach For The HeART" fundraising programs, partial proceeds of "ELEVATED" will go to the charity fund.
Vincent Cazeneuve, born in Toulouse, France, made his way to Chongqing, China nine years ago. Lacquer painting, one of the most ancient arts in China, acted like a strong magnet on him. Vincent has dedicated all his passion to living in China and deciphering the work of lacquer in his own language, learning and drawing lessons from traditional Chinese lacquer paintings while simultaneously blending his own creativity into Chinese lacquer traditions.
Please click [More] to view Vincent's recent artworks on Artsy.
"Learning with the Maestro" is a program from Indonesia's Secretary of Education and Culture in which 10 maestros, one from each profession, share their knowledge and experience with high school students. Putu Sutawijaya was selected as the Maestro of Fine Art, alongside other maestros of music, dance, singing and acting.
It is a direct introduction to arts activities for young people who want to learn to be artists. The goal is to encourage the students' creative side and share art activities with them, letting them understand that the art world carries the same spirit as any other profession. As artists, the maestros feel they need to share their knowledge and experience with people who are curious about art.
International Semarang Sketchwalk 2016 is an extraordinary sketch festival held in three historical areas, showcasing the beauty of cultural heritage buildings for participants to capture. It started as a monthly event in July 2014; then, in August 2015, a group of Semarang sketchers and architects decided to transform the local monthly event into an international sketching festival. Both local and international participants are welcome to register by 10 August 2016.
In this exhibition the artists go into dialogue with each other and show a tangible reflection in the sphere of Indian summers.
Eddi Prabandono from Indonesia, known for his beautiful installation at the Venice Biennale in 2015, has tried to combine nature and machinery in the theme, because the location is along the highway and there are a lot of scrapyards.
When the works are completed, they will be included in the Land Art Delft collection, which now contains seventy artworks, including a large number by Joost Barbiers.
11-13 August Ombak Bali International Surf Festival at La Plancha, Jalan Mesari Beach, Seminyak. Ombak Bali in itself becomes a platform for surf filmmakers to promote their work, and aims to promote awareness of social issues and the environment by screening inspirational surf films.
12-13 August 2016 Ubud Village Jazz Festival at ARMA Museum Ubud, Jalan Raya Pengosekan, Ubud. This year's community Jazz concert carries the theme, 'Embracing Uniqueness' and among the notable artists slated to perform this year include the East West European Jazz Ensemble featuring Gregory Gaynair from Germany, Peter Bernstein from USA, Piotr Orzechowski from Poland, Youn Woo Park Trio from South Korea, and a big lineup of Indonesian performers such as The Daunas, Arman & Penina, Bali Gypsy Fire and Mia Samira.
19-21 August 2016 Jazz Market by The Sea at Taman Bhagawan, Tanjung Benoa the festival will feature top Indonesian Jazz artists and performers.
24-28 August 2016 Sanur Village Festival at Sanur Village, This year's festival is slated to feature a SanurRun on 23 August, a Sanur Creative Expo, Agro tourism Expo, music performances and art on 29 August, and a highly anticipated cultural street parade.
26-29 August 2016 Legian Beach Festival at Jalan Pantai Legian, presenting its annual offering of fun activities, day and night.
|
#!/usr/bin/env python3
import sqlite3
import pandas as pd
from scipy.stats import chi2_contingency
cnx_lda = sqlite3.connect("1_31_LDA.db")
cnx_sentiment = sqlite3.connect("2016-01_sentiments_annotated.db")
# get topic distribution over stories
_ = pd.read_sql("SELECT * FROM [1_31_LDA]", cnx_lda)
topics = [str(i) for i in range(100)]
df_lda = _[topics]
topics_lemmas = _.loc[_.index[-1]][topics]
df_lda.index = _['story_id']
df_lda = df_lda[:-1]
# get emotion vectors
_ = pd.read_sql("SELECT * FROM [2016-01_sentiments_annotated.db]", cnx_sentiment)
df_emotions = _[['negative', 'ambiguous', 'positive']]
df_emotions.index = _['story_id']
def controversy(topic, cutoff_topic=.1, df_emotions=df_emotions, df_lda=df_lda):
# retrieve all relevant story ids for given topic
story_ids = list()
for row in df_lda.iterrows():
if row[1][topic] is not None:
if float(row[1][topic]) > cutoff_topic:
story_ids.append(row[0])
story_ids = set(story_ids)
# retrieve all emotions vectors for relevant stories
emotion_vectors = list()
for row in df_emotions.iterrows():
if str(row[0]) in story_ids:
if row[1].values.sum() > 0:
emotion_vectors.append(list(row[1].values))
# calculate divergence
if len(emotion_vectors) > 2:
_, p, _, _ = chi2_contingency(emotion_vectors)
print("topic " + topic + ": controversy score: " + str(1 - p))
return (1 - p), story_ids
else:
print("topic " + topic + ": not enough stories with emotions vectors in that topic")
return 0, story_ids
# evaluate for each topic
stories = list()
controversy_scores = list()
for topic in topics:
score, ids = controversy(topic)
controversy_scores.append(score)
stories.append(ids)
df_topic_controversy = pd.DataFrame(index=topics)
df_topic_controversy['controversy'] = controversy_scores
df_topic_controversy['lemmas'] = topics_lemmas
df_topic_controversy['story_ids'] = stories
df_topic_controversy.to_csv("January_controversy_scores.csv")
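The controversy score above hinges on the Pearson chi-square test: stories whose emotion distributions diverge strongly yield a large statistic, a small p-value, and thus a score near 1. The statistic that scipy's `chi2_contingency` computes can be sketched in pure Python to make the idea concrete (a minimal sketch of the statistic only; the scipy function additionally returns the p-value, degrees of freedom, and expected table, and applies no continuity correction options here):

```python
def chi2_statistic(table):
    """Pearson chi-square statistic for a contingency table
    (rows = stories, columns = emotion counts)."""
    row_sums = [sum(row) for row in table]
    col_sums = [sum(col) for col in zip(*table)]
    total = sum(row_sums)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count under independence of row and column.
            expected = row_sums[i] * col_sums[j] / total
            stat += (observed - expected) ** 2 / expected
    return stat

# Stories with identical emotion profiles: no divergence at all.
print(chi2_statistic([[10, 10], [10, 10]]))  # 0.0
# Stories with opposite emotion profiles: large statistic,
# hence a tiny p and a controversy score (1 - p) near 1.
print(chi2_statistic([[10, 0], [0, 10]]))    # 20.0
```

With three emotion columns per story, the toy tables above would simply gain a column; the statistic grows as the per-story emotion mixes diverge, which is exactly what `1 - p` rewards.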
|
Arneb (AKA-56) was laid down under a Maritime Commission contract (M. C. Hull 1159) as Mischief by the Moore Drydock Co., at Oakland, Calif; launched on 6 July 1943; sponsored by Mrs. Carol J. Palmer, the daughter of a plant engineer; acquired by the Navy on 16 November and towed to Portland, Ore., where she was converted to an attack cargo ship by the Willamette Iron and Steel Co.; and commissioned on 28 April 1944, Cmdr. Howard R. Shaw in command.
Outfitted and loaded with stores for her first cruise by 10 May 1944, the attack cargo ship steamed to San Diego for shakedown training, which was made unexpectedly interesting by her rescue of the three-man crew of a Navy Grumman TBF that had had to "ditch". During June and July, the ship practiced amphibious maneuvers using Army troops to make landings on San Clemente Island.
On 22 July 1944, Arneb sailed for the Hawaiian Islands, and arrived at Pearl Harbor on 30 July. After debarking passengers, the ship continued on to Guadalcanal for training. On 29 August, the ship got underway with three transport divisions to rehearse landings for the invasion of the Palau Islands. On 8 September, she sortied with Transport Division (TransDiv) 32, and headed for Angaur Island.
The cargo ship arrived on 17 September 1944 and lowered all of her boats off the west side of the island to feign landings in that quarter in an effort to divert Japanese defensive forces. The next day, she actually landed troops and equipment of the 306th Engineers. Arneb remained in the Palaus until 23 September, when she began carrying cargo and troops to Ulithi, Hollandia, and the Admiralty Islands.
At Manus, she fueled and loaded supplies for training and rehearsal exercises for her next operation, the liberation of the Philippines. She got underway on 12 October 1944, arrived off Leyte on the 20th, and, despite enemy shelling, immediately began discharging her cargo and troops. Arneb next steamed to Guam to take on more cargo and troops for delivery at Leyte on 23 and 24 November.
Following her second voyage to Leyte, the ship steamed to Hollandia to onload provisions, cargo, and personnel as well as to receive minor repairs. Arneb departed Hollandia on 27 December 1944 to participate in the invasion of Luzon, anchoring in Lingayen Gulf on 9 January 1945. Since she was not carrying high priority cargo, her boats helped transports in landing troops and cargo on D-day before they began unloading her own cargo the following day. Although enemy air and small craft activity was intense, Arneb only lost one LCVP. She returned to Leyte on 15 January and ferried troops and supplies to Luzon for the assault on the area around La Paz on the 29th. During the next few weeks, the vessel took on fuel, cargo, and other supplies in Leyte Gulf in preparation for her next major task, the invasion of the Ryukyus.
On 27 March 1945, Arneb left Leyte Gulf, arrived off Okinawa on 1 April, and unloaded supplies despite enemy air attacks. She retired to Guam and was ordered to proceed on 10 April via Pearl Harbor to the United States. The ship arrived in San Pedro on 3 May and was given a 15-day availability. Then, after loading ammunition and other supplies, she sailed for Pearl Harbor on 8 June. She returned to the west coast before the end of the month and moved into dry dock at the Moore Dry Dock Co. On 20 July, the cargo ship was once again headed for the Hawaiian Islands on the first of two voyages made before the end of August. During the ship's second run to Oahu, Japan capitulated, ending the fighting in the Pacific.
While in Pearl Harbor on 28 August 1945, Arneb received orders to load cargo and sail for the China coast to support the occupation forces. She ferried cargo and troops between Okinawa and China until 26 October, when she headed for San Francisco. Diverted to Seattle en route, she arrived there on 13 November 1945. The ship was then assigned to the Naval Transport Service and made cargo runs between the west coast and the Far East until December 1947.
Placed in reserve at the Philadelphia Naval Shipyard on 16 March 1948, Arneb was modified to prepare her for polar operations. Equipped to become Rear Adm. Richard E. Byrd's flagship for a planned Antarctic cruise, she was recommissioned on 19 March 1949. Following shakedown training out of Guantanamo Bay in April and May, Arneb cruised in the North Atlantic from June to October to test the effectiveness of the cold weather equipment installed. After her return to Norfolk on 1 November, the ship trained in Chesapeake Bay.
Arneb, needed to supplement the Sixth Fleet in the Mediterranean in early 1950, returned to the east coast in May, and underwent a three-month availability. She resumed normal training and support services for the Atlantic Fleet when the Korean War compelled postponement of the Antarctic expedition. Nevertheless, the ship utilized her cold weather gear from March to December 1951 when she rendered logistic support to naval activities in England and North Africa, including an amphibious training operation in Greenland.
Until March 1955, Arneb cruised primarily in the warm waters of the West Indies. From January to April 1952, the transport ferried cargo between islands in the West Indies. After a yard overhaul, she participated in the lengthy, large-scale NATO Operation Mainbrace in the North Atlantic and Mediterranean. After her return to the east coast in February 1953, Arneb made six cruises to the West Indies, before beginning preparations for an extended operation at Antarctica.
As a preliminary trial before her cruise southward, Arneb participated in an operation in waters north of the Arctic Circle along the east coast of Baffin Island in August and September and then returned to Norfolk for final outfitting. On 14 November, Arneb got underway as flagship of Operation Deep Freeze I that would allow her to claim the distinction of crossing both the Arctic and Antarctic Circles in the course of one year. She transited the Panama Canal on 20 November, stopped at New Zealand and Franklin Island before arriving at Kainan Bay and McMurdo Sound, where she stayed from 27 December 1955 to 30 January 1956. She returned to the United States via the Indian Ocean, the Mediterranean Sea, and the Atlantic Ocean, completing her circumnavigation of the globe upon her arrival at Norfolk on 5 May 1956.
After undergoing an overhaul from May to August and refresher training at Guantanamo Bay, Arneb was prepared for Operation "Deep Freeze II." She departed Norfolk in November; stopped at Wellington, New Zealand; entered the ice field on 16 December; and rendezvoused with the Coast Guard icebreaker Northwind (WAGB-282). Arneb experienced no difficulty in following the icebreaker during the first day of movement through the frozen sea; but, on the 18th, a quarter-inch crack, apparently caused by contact with ice during the previous two days, appeared in her hull running some 31 inches above and below the waterline. Arneb's men repaired the damage, enabling the ship to make slow but steady progress toward McMurdo Sound, where the ships arrived on Christmas Eve.
Upon completion of their work there, the two ships returned to Cape Hallett, where Arneb moored to the ice while Northwind proceeded into Moubray Bay to clear an unloading site. On the last day of 1956, the ice pack into which Arneb was nosed began to move and soon surrounded the ship with solid ice pressing against her hull. The framing on both sides of the ship began to buckle, rivets popped, seams split, and beams ripped. Frigid water and ice began flooding into several cargo holds at a combined rate of 1200 gallons per minute. Damage control parties worked doggedly to contain the inrush of water, but the men were only able to stay in the water for a few minutes at a time. Nevertheless, by using mattresses, steel plates, and shoring timbers, they managed to reduce the flow of water until the pumps could lower the water level.
On 3 January 1957, the ice pack had loosened, enabling Northwind to lead the battered Arneb into port. After unloading the cargo, the crew repaired the cracks and split seams by listing the ship alternately to port and to starboard. Although having suffered a bent rudder post and a broken propeller blade, Arneb was able to continue the operation.
No further mishaps occurred until 30 January when Arneb, the icebreaker Glacier (AGB-4), and USNS Greenville Victory (T-AK-237) attempted to push through the icepack off Knox Coast. A large chunk of ice broke off and brushed Arneb's port side, ripping a gash 12 feet long and one-half inch wide and once again flooding the holds as well as buckling plates, popping rivets, and opening seams. The experienced damage control parties used the same techniques to patch the new wounds in her hull. The ships then got underway again, with Glacier towing Arneb. Early the next morning, they arrived at Knox Coast and once again, the damage was repaired.
Arneb left the ice fields on 17 February and steamed to Sydney, Australia, without incident. There, she went into drydock and, after minimum repairs, got underway on 28 February 1957 for the continental United States.
In spite of her troubles with ice damage, Arneb made five more cruises to Antarctica to resupply the research stations and to transport hundreds of scientists involved in research on the frozen continent. During Deep Freeze 61 she even delivered the foundation of a nuclear power plant to McMurdo Sound. Following Deep Freeze 63, Arneb was modified to enable her to return to normal duty with the Amphibious Force of the Atlantic Fleet. She underwent intensive training in amphibious operations through participation in major Caribbean exercises. In 1965, she transported much-needed supplies to American forces operating in the Caribbean during the crisis in the Dominican Republic.
Arneb began a routine of operations in Atlantic and Caribbean waters and practiced with Navy and Marine Corps personnel in actual landings at Onslow Beach, N.C., and Vieques Island, Puerto Rico. During one such exercise, LANTFLEX 66, 94 Atlantic Fleet ships took part in a three-week opposed approach, landing, and departure from Vieques under the surveillance of a Soviet intelligence-gathering trawler.
Between 8 and 22 February 1967, Arneb was in drydock at the Bethlehem Steel Corp., in Baltimore. She then moved to the Berkeley yards of the Norfolk Shipbuilding & Drydock Corp. for the remainder of her overhaul. With the overhaul completed and following refresher training during the summer of 1967, Arneb resumed her standard operating schedule of local Atlantic coast operations.
Arneb deployed to the Mediterranean in January 1968 and spent five months there as part of the 6th Fleet's Amphibious Ready Force. In August 1968, the cargo ship became the first amphibious ship and the first AKA qualified for spacecraft recovery duty, and she was on station as the secondary recovery vehicle for the Apollo 7 flight in October. On 1 January 1969, Arneb was reclassified LKA-56.
Arneb made three more Mediterranean cruises in 1969 and 1970 and participated in numerous Caribbean exercises before the Navy decided to end her naval service. Rather than inactivate and preserve the worn old ship, the Board of Inspection and Survey for the Atlantic Fleet recommended that Arneb be disposed of by sale. She was decommissioned at Norfolk on 12 August 1971, and her name was stricken from the Navy list the following day. She was sold on 1 March 1973 to Andy International Inc. of Houston, Texas, and scrapped.
|
#encoding: utf-8
"""
Nothing to say
"""
from __future__ import print_function
import re
import requests
from BeautifulSoup import BeautifulSoup as soup
from sqlalchemy.orm import sessionmaker
import sys
from Tasks import getContent
import DataModel
from DataModel import Salary
import datetime
import time
LAGOU_JOB_URL = 'http://www.lagou.com/jobs/{0}.html'
def getLagou(i,p={'job_description':('dd',{'class':'job_bt'}),
'job_request':('dd',{'class':'job_request'})}):
return getContent(LAGOU_JOB_URL.format(i),patterns=p,ignore_pattern=['div',{'class':'position_del'}])
#DBSession = sessionmaker(DataModel.engine)
get_salary = r'(?P<low>[\d]{,3})k-(?P<high>[\d]{,3})k'
salary_reg = re.compile(get_salary)
SAVE_CSV = './save_file_' + str(datetime.datetime.fromtimestamp(time.time())) + '.csv'
def saveLagou(job):
res = re.match(salary_reg,job['job_request'])
try:
with open(SAVE_CSV,'a+') as f:
res = res.groupdict()
salary = (int(res['low']) + int(res['high'])) / 2
jd = job['job_description']
f.write('{0},{1}\n'.format(salary,jd.encode('utf8')))
except Exception as e:
print(e)
if __name__=='__main__':
for i in range(0,999999):
try:
saveLagou(getLagou(i))
print('\r {0} id Finished\r'.format(i),file=sys.stdout)
except Exception as e:
pass
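The salary parsing buried in saveLagou is easy to exercise in isolation. This sketch reuses the same regex to show the midpoint that gets written per CSV row; the '15k-25k' string is an invented example of Lagou's salary format, not real scraped data:

```python
import re

# Same pattern as salary_reg above; "{,3}" is shorthand for "{0,3}".
salary_reg = re.compile(r'(?P<low>[\d]{,3})k-(?P<high>[\d]{,3})k')

def parse_salary(job_request):
    """Return the midpoint of an 'Nk-Mk' salary range, or None if absent."""
    res = salary_reg.match(job_request)
    if res is None:
        return None
    bounds = res.groupdict()
    return (int(bounds['low']) + int(bounds['high'])) / 2

print(parse_salary('15k-25k'))     # 20.0
print(parse_salary('negotiable'))  # None
```

Returning None instead of letting the `AttributeError` surface (as the bare `except` in the main loop does) makes unmatched job_request strings explicit rather than silently skipped.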
|
Jeane: In my dream, it feels like I’ve gone somewhere. It almost feels, well, I guess they speak English, but it’s like someplace I’m not familiar with.
I’m trying to investigate, there was a man that was shot, and he was part of, I don’t know, something like a militia, or a police force, they hang together a lot, and you have the feeling that whoever shot him was part of one of them, so you’re trying to figure it out.
And so you never know whether the people that are trying to help you track down what happened are part of what caused what happened, or are they not? So whatever gathering you go to it feels like you’re always trying to figure out who was part of the problem.
When we go back to the residences, which look really poor, I mean you feel like you’re in a third world country, sometimes people will do really odd things. Like they throw something up, almost like try to anchor it over the window, and pull up, and I’ll pull it in rather than let them climb in the window, and then when you try to go out and find your way around it’s like the hallways are like a maze. You can get lost.
And I’m beginning to kind of get a sense of maybe where the problem lies and identifying certain people, but I’m not sure. At one point I go out into the woods and I go to an area where there’s a school, and then out into the woods again, and I’m looking for some information.
But it’s really hard when I first go out across some lawn because this place has these snakes that are molting. They’re kind of slimy like slugs, and they’re a little more wild if they’re not in a pair. If there’s a pair, then they’re better, but it’s like maybe if you don’t watch it like they’ll jump up and they almost get to your hand.
And they seem to multiply all over the grass, so to walk amongst them is kind of creepy. But I seem to make it through that, and I’m with a couple of other people, a guy and someone else, and we’ve gone by an old school and I feel like I’m beginning to zoom in on some information.
I’ve even maybe seen a file that somebody didn’t mean for me to see, which is giving me information on a family that seems I feel will help me zoom in on who did this, but I’m not sure. So I’ve gone somewhere and I’ve kind of let it leak that I have this information, and I’m waiting to see who shows up – figuring that will help me identify.
And I’m surprised because I see someone come that I thought was one of the good guys, Israeli actually, he’s carrying something when he comes in, it’s kind of like a gooey cinnamon bun, and he shares a little bit of it but not much. So then I take a bite and then he says, well, he actually didn’t want to share much of it because he was going to take it home to his family.
And I said, “Oh, well, if I had known that like I wouldn’t have taken a bite.” Then I also feel like he’s there to see the information. Anyway, that’s when I woke up.
John: The dream repeats, and repeats, and repeats again and again the same energetic conundrum, which had to have left you feeling awkward because the issue is not something that is associated with the situation in front of you. The issue is associated with an energetic imbalance that is preexisting.
Now, these energetic imbalances often times get set up when you fail to follow a proper adab, or you fail to take proper responsibility with something and you just let it hang fire, or you determine that it’s okay to ignore, or that it’s somebody else’s responsibility to take something into account and so you let it hang fire.
There are all kinds of reasons why one fails to be attentive. In some cases the issue can be bigger than yourself and is still a lingering energetic imbalance based upon the way something just happens to be in the collective. And all of that, then, has an effect that continues to permeate out and affect the situations as they unfold in life.
And so, in other words, this is a subject that goes from grosser to subtler. You’re approaching it as if, though, it has a more direct consequence that you can put your finger on. But that consequence still is an energetic stigma or effect that is outside of each scenario.
In other words, it’s like some little thing happens, let’s say, that throws you off, and then you continue to proceed in a whole other setting and yet the effect of what preexisted before continues to influence, or change, or alter, or create connotations to whatever is continuing to unfold as a result of some unfinished, upstream business.
And so you’ve forgotten, or aren’t paying any attention anymore, to whatever it was that may have set off the stigma or the energetic qualm, or jinx, or spell and you’re only trying to look at what needs to be contended with based upon the current situation in front of you.
And there is no answer for that current situation in front of you, because the current situation in front of you is created by an energetic that is just a tone, or manner, or mode in the overall. And yet you try to solve it in the specific, and it can’t be solved in the specific because it’s like a reciprocal effect of what had taken place, or occurred, earlier in which you didn’t apply, or reach, or bring in the proper adab, or responsibility, or communication or whatever it was that should have, and could have, taken place at that time.
That’s how it is in the denser way of looking at how karma in a flow, and in an unfoldment, gets set up. In a subtler way, it can be a vibration that’s very, very, very complex, just like, for example, the oil spill is said to be caused by all of us in some sense because we all participate in receiving the fruits of oil as part of what we take in as being essential to our way of life now.
And so, as a consequence, that creates a certain clamor that leads to certain consequences, and so we’re not quite clear or pure in terms of being able to deal with that with a clear mind and a clear energy, because we are clouded by the fact that we have participated in those fruits in a personal way.
And so that’s what the thing can look like in a real, real subtle way. However, your dream had the whole feel that it was dealing with something that went awry, and as it went awry created image after image, and scenario after scenario, that was just out of touch with what needed to be in sync energetically for there to be a cohesive unfoldment.
This has a deeper meaning, too, in that this is how consciousness is limited and/or kind of works. For example, I noticed long ago that I could get into a certain headspace energetically, vibrationally, in which I could see things while in that particular context. But if I slipped out of that context, and took in something more as a bigger scope, then the perspective that I had in this prior, more streamlined, parameter context, which was true under that schematic, and under those parameters, and under those particular conditions, would no longer be true in a bigger scope, in a greater wholeness.
And so this is a whole level of a kind of communication that exists, that one has to learn how to be in touch with as part of the way that one exists and lives in a world that is intertwined energetically. And that every vibration has a reciprocal cause and effect in the outer, which continues to continue until it runs its course.
And karma is created when events occur that continue to carry lingering effects of unfinished business, which then cause things to continue to reverberate. And trying to solve those things that reverberate based upon just the reverberation, in its dense manifested way, is close to impossible, because you have to back up and free yourself from that energetic, and truly let go, in a completely clean way. But you didn’t do it at some point, who knows when in your past, and so now you have the consequences of confusion in the present – or in the future even.
And so what you have done is, you’ve kind of in your image portrayed a little bit about how a person puts themselves into pain, and into karma, based upon a type of taking for granted, as opposed to staying fully responsible and conscious when you know you should have been, in some regard or another, at some other point in time. And when you didn’t do it, then you set in motion this chain of events that just kept haunting you, and haunting you, and haunting you.
Saying it like this is describing it in the grosser, but then when you look at the oil spill, I mean you’re going back hundreds and hundreds of years to see where the seed of that imbalance got put in place. And thus the true letting go is to let go of everything, so that something new can shift and if you cannot let go of things even like that, then whatever can be potentially shaped is still going to have to include that as an unfoldment in the process of life as it goes forward.
So this is a huge, huge subject of which what you’re experiencing is the tip of the iceberg.
John: Well, in the next dream, I’m driving off of an interstate into an exit, and up ahead of course you have to stop and then go left or right. And so you have to slow down because on the interstate you’re maybe going 80 miles an hour, and you have to slow down to 35, 40, whatever, and come to a complete stop.
But for some reason I can’t get my mind to engage to hit the brakes to stop, and I can see the stop way up ahead. I have plenty of time. In fact, you know, if I don’t do it right at this moment, I can do it at the next moment, or the next moment, but I seem frozen and out of sync. I can hit the clutch, but for some reason or another I can’t get my foot to go onto the brake.
It’s as if I can’t get my mind to respond in unison. And you comment on how fast I’m going and I say, “I know, I can’t get my mind to respond.” This is very miserable. This isn’t funny. It’s very miserable. I mean I’ve been doing my darndest, you know. I was even reaching down trying to hit the pedal with my hand, but how can you see the road and hit the pedal with your hand and stop the thing?
You know, none of it was working. Was I going to crash? And so the sensation of course is a horrible, horrible, helpless sensation. In other words, I know how one needs to be, but I’m unable to come into the body or something, or bring the focus I need in order to do something mundane in manifestation.
I freeze up. It’s as if I’m freezing up in the outer because somehow or another I’m not grounded or I’m not in body or something, therefore not properly focused. So the dream is indicating that, as a consequence, there’s a lack of acuity that happens if I don’t come properly back into the body.
So, apparently, I’m remaining a little bit too much in this other transcendent space and inner plane in which the indulgence there does nothing for conduct that needs to occur in the outer.
It’s like the dream is saying that to be responsible I need to be familiar, more familiar, in both levels simultaneously.
In other words, one level has to do with sight where I can see ahead what needs to be, I have to stop, and the other level has to do with the mundane action in the physical where you have to be able to be grounded enough to make all that happen. In other words, it comes all the way down. You can see the potentiality but it also has to be lived out.
In the next dream I’m like, I don’t know what I am, 17, 16, 18, I don’t know, maybe a young adult, anyway I’m at home visiting my dad, my folks. It’s like I’m on vacation. And for some reason the talk gets around to bird hunting, or maybe that’s just suddenly what seems to happen, because I suddenly look out the window and there are all these pheasants.
Now you rarely see pheasants, and a pheasant is kind of an exotic bird, you know, that has a whole glory about itself – in terms of its beauty, and its presence, and its demeanor. A pheasant is a little bit like a peacock, too. It can be a little strident.
So I ask, where’s the shotgun? Because I’m going to shoot one of these pheasants. And my dad says he doesn’t know where it is at, so then I ask about the 22 and get a similar response. So what about the 22 Hornet, his favorite gun? I know he knows where that’s at. I will make a head shot.
In other words, because you don’t want to shoot it with a bigger gun and blow it all to pieces so that there’s nothing there to appreciate, and he won’t get it. So, as the dream progresses, I go outside because, you know, I want to look at the pheasants more I guess, and I’m amazed how close I can get to them.
“Look how close that pheasant is.” The idea of me shooting it in the head seemed a little bit farfetched because usually you can’t get close enough, but there I am. It would be a simple, easy head shot.
Then I look closer at the pheasant. It’s a mother with a young one. Then I look at another nearby pheasant, and it too has a little one. At some point I begin talking to the magical pheasant because it suddenly has a head like a human. I’m amazed that I can rapport like this. I would’ve missed out on this connection if I had had my way.
But apparently I haven’t dropped this idea, this notion, this mannerism of how you have to be, because I mean I have these memories, wonderful memories, as a kid going out and bird hunting, but I never did much big game hunting. My dad did most of that. He never bird hunted. I was the one that liked to bird hunt. I always did these other little things.
So in the next dream I’m going hunting with my dad and other family members. So after driving way back into an area we get out of the vehicle and we start walking. And when we finally reach an area where there’s a snow line, that’s when I realize I’m not dressed properly. I don’t have walking shoes on. I have just regular loafers or something.
So I pause as the others go on, and I seem to get distracted because they disappear. Lots of time seems to somehow go by, and I suddenly realize there’s no way I’m going to be able to catch up, and of course they just have to keep going assuming that I’ll figure it out and catch up.
But so much time has gone by that I know that there’s no way I can catch up, plus I don’t know how to go from where I’m at back to the vehicle. But suddenly where I’m at, hanging out, there are other people, kids playing on the hillside and stuff all around, which indicates to me even more that I just don’t know where I’m at.
And then suddenly I realize, I can hear it now, my dad will be furious when I finally get out of this mess. He will probably say, I’m not going hunting with you ever again. And, given how I now feel about all of this, that would be okay with me, although when this happens I will probably act hurt. And so I wake up.
I can’t stand the condition I am in. You know, I’m lost, I don’t know how to follow out, I don’t know how to get back, it’s actually starting to cool down, it’s like 3:00 or 4:00 in the afternoon and the light’s starting to fade. If I were to try to go forward it would get dark on me and I’d be in real bad shape, and if I try to go back who knows where I’m going to end up, and no way they’re going to know how to find me.
So the meaning of the two dreams is, what I think I like based upon old memories of bird hunting that I enjoyed as a kid is not where I’m actually at now. When I settle back from the mannerisms, I’m able to appreciate the relationship I didn’t know I had, that has opened up inside in ways that I hadn’t realized were there, or possible.
But I still haven’t shaken certain mannerisms of the past. And, as if to get that out of the system, then I indulge in the sport by going big game hunting, something that I really don’t have a good connection with. I had the connection with bird hunting, and I can’t even do it in a sub-fashion, in other words, or as an octave, because something has changed.
I’ve lost the impulse to go forward in this mannerism. I know that others may even expect me to continue this type of pattern, but my heart just isn’t into it. I simply stop because it isn’t worth the effort. Consequently, in relationship to the mannerism or the pattern that exists, I’m lost.
In other words, I would think and seem to know where I’m at if I stayed in that pattern, but when I don’t let go of that kind of familiarity I go into kind of a condition in which I’m confused in the psyche as if, you know, I’m doing something wrong because I let go, I’m not adhering to the impulse. And so you can develop a self-consciousness, even.
Well somehow somewhere there’s a happy medium, but I’ve not found that for myself. What we are talking about in the dream is a state in which I handle the inner and outer in a way that needs to be balanced, and I haven’t found that yet. I’m still estranged from a necessary value that comes from the heart, which eludes me because I’m still caught up in preconceived ideas, and notions, and mannerisms, and habits.
So the dreaming seems to be about habits and mannerisms and whatnot trying to thread the needle, or trying to reconcile two places, and having to contend with the various peculiarities that one’s adopted over time that keeps them from being able to do that.
In other words, it’s like when you take the idea of hunting, I guess there’s a certain aspect that I’m indulged in, but not in the bigger sense that most people are indulged with their ego, which has to do with big game hunting.
I mean most people could care less about bird hunting, and yet I found that fascinating but, on a subtler level, even that had a certain absurdity because I was pushing myself off and away from an intertwinedness that one needs to see in relationship to everything being connected and alive – in terms of one’s self.
And that these birds that I was hunting were all reflective of a condition of myself, too, just like big game hunting is like that, only it has to be done with a certain level of awareness, a divine awareness, and then it can be okay. You have to have a relatability to everything in life, as opposed to just a rightfulness that you superimpose.
And of course the meaning of all of this is I’m being confronted to be more true to my heart. My abstractions are not where it is at. And so if I can hold the value, I can actually then play act in the outer, with an acuity to the inner – that’s where this is going. That’s what this dreaming is like. These are components in that direction. Whether I quite get the memo, or not, this is still a big gulp to take.
John: So in the meditation dream, I seem to be going back and dealing with the polarities of being in the outer, versus what it’s like in the inner.
And from the meditation dream I can see that on the inner level there’s this area that is subtle and flexible. It’s able to handle a lot of change and diversity and, as a consequence, there’s a fluidity there that seems to undulate with life.
And then, by contrast, if I look at how it is that I look at things and relate to things in the physical, or in manifestation, that seems very linear, or I don’t seem to appreciate or adapt to outer changes quickly enough. As a result, I experience life in a way that is more slowed down and narrower.
It seems that, over time, a gap has formed between these two states, my higher self and my lower self, you might say, or the inner and the outer. You could say that the density of the outer in the physical is functioning at a slower speed, and maybe one way of saying that slower speed is it’s accessing life as if it was created just out of sound, and therefore it’s not seeing behind that the true light.
And you could say that on this other inner level, the veils lift somewhat and so that there is a greater perception, a greater attunement to what is meant to be. I guess you could say then that this is akin to a greater speed, and a freedom, because you get glimpses that aren’t possible, or so it seems, when you’re in the veils of manifestation.
However, I also can’t help but recognize that the two states are reflective of each other, but it’s at what feels like different points in time. I was trying to figure out how it is that I can explain this, for I have a sense of knowingness about what’s about to take place, yet in the physical you can blank all of that out and act as if nothing is that big of a deal.
And so I can’t help but notice that the experiences in the outer narrow down or, in other words, are more drawn out, and have this sense of greater proximity, or more hands on. Yet on the inner the experiences approach a kind of central unfolding overall vibration that kind of emanates throughout everything, as if you’re gaining some sort of access to something that permeates or intertwines with everything, as if it’s in light, I guess is the way you’d say it.
Because sound is manifestation, and light would be something that could traverse a greater domain. In that there is a quality of a potential and intended destiny. So when you try to reconcile the two states, you end up with kind of a condition as a person that’s a little ominous, because in the outer there’s just a sense if you force yourself to look at things more closely, there’s a sense in the bones of a bigger picture unfolding from the inner – that is, if you have some sense of that awakening.
Otherwise, you don’t have that connection at all, and you just go about as if everything is going to be okay. But to the higher self, to the degree to which there is this inner connection at this time, which is creating a shift, you can’t pretend anymore. You can’t get lost in the minute detail of the outer that keeps you from seeing a bigger picture.
So as things unfold laboriously in the outer, that pace that you see when you have a more speeded-up quality of the inner is kind of nauseating because it’s acting delusional.
So there’s this peculiar dilemma of, how do you cope with both? Because when the higher self can surrender and accept the outer, instead of seeing it as nauseating, and slowed down, and dulled out, and full of veils, and estranged from what needs to be, if the higher self can accept the outer, a wonderful receptivity is possible as the impulses that come out of the light into a physical manifestation make for a more direct, even though it’s slowed down, experience of the divine, which may be more veiled, however, because of the denseness of things but, because of the immanence, in another way it’s almost more graspable.
So, for those who can bring the two states closer together by bridging the illusory barrier between light and sound, there is the heightened and greater aliveness of an intertwined universe.
To become more intertwined is to become nearer and dearer to the essence of the self where nothing is going on. This is where the beginning and the end are One, as this is where pure experience is at rest.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
from datetime import datetime

from server_logger import log

__doc__ = """
* Account class
"""


class Account:
    def __init__(self, name, email, password, account_type):
        self.name = name
        self.email = email
        self.password = password
        self.type = account_type
        self.money = 0
        self.passbook = []  # list of (timestamp, deposit, withdrawal) tuples

    def datetime_now(self):
        """
        For the format reference of datetime, refer to the table at
        the end of the following page:
        https://docs.python.org/2/library/datetime.html
        """
        return datetime.now().strftime('%b %d, %Y %I:%M %p')

    def deposit(self, amount):
        """Add money and record the transaction in the passbook."""
        self.money += int(amount)
        tx = (self.datetime_now(), amount, 0)
        self.passbook.append(tx)
        return "Money deposit successful! New balance Rs. " + str(self.money)

    def withDraw(self, amount):
        """Remove money if the balance covers it; otherwise refuse."""
        if int(amount) <= self.money:
            self.money -= int(amount)
            tx = (self.datetime_now(), 0, amount)
            self.passbook.append(tx)
            return "Money withdraw successful! New balance Rs. " + str(self.money)
        return "Sorry, you can withdraw a max of Rs. " + str(self.money)

    def login(self, password):
        if password == self.password:
            return "Login successful", self.type
        return "Login unsuccessful", "Invalid"

    def changePassword(self, password):
        self.password = password
        return "Password changed successfully"

    def getPassbook(self):
        if len(self.passbook) == 0:
            return 'No transactions'
        return '\n'.join(['%s : %s : %s' % (tx[0], tx[1], tx[2])
                          for tx in self.passbook])

    def __str__(self):
        return '%-15s %-20s %-15s %-10s' % (self.name, self.email,
                                            self.password, self.type)
|
Global business contributes to a prosperous, diverse and dynamic world. Students earning a global business degree are prepared to tackle the complex international marketplace, working for Fortune 500 companies, mid-size companies and entrepreneurial enterprises with global reach.
At Kean, our global business students benefit from prestigious multinational business connections, overseas internships and an accessible and exceptional faculty. Students graduate with the real-world professional skills and an understanding of international trade that employers around the globe demand.
Students from Kean Global Business School gain professional experience through the Global Practica program, working with corporations around the world.
In the classroom and at internships around the world, you will become immersed in global business practices and international market strategy to prepare you for careers at fast-paced, multifaceted international corporations and organizations.
Global business majors get a full-time faculty member as an advisor upon admission so someone is there to guide and support you through the program and into your first job.
Your internship is a pathway to your career, a learning experience to bridge the gap between the classroom and the boardroom as you gain real-world international business experience at a multinational corporation or organization, both abroad and in the U.S.
Study at the state-of-the-art College of Business and Public Management at Wenzhou-Kean University for the same cost as studying at Kean USA.
An international consulting project is the capstone project for global business students. Companies throughout Europe, Asia and the Americas open their businesses to our students to solve real business issues in real time, giving our students experience in cross-cultural management as part of a global consulting team.
|
"""Unit test for treadmill.appcfg.abort
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import io
import json
import os
import shutil
import tempfile
import unittest
import kazoo
import mock
import treadmill
from treadmill import appenv
from treadmill import context
from treadmill import fs
from treadmill.apptrace import events
from treadmill.appcfg import abort as app_abort
class AppCfgAbortTest(unittest.TestCase):
"""Tests for teadmill.appcfg.abort"""
def setUp(self):
self.root = tempfile.mkdtemp()
self.tm_env = appenv.AppEnvironment(root=self.root)
def tearDown(self):
if self.root and os.path.isdir(self.root):
shutil.rmtree(self.root)
@mock.patch('treadmill.appcfg.abort.flag_aborted', mock.Mock())
@mock.patch('treadmill.supervisor.control_service', mock.Mock())
def test_abort(self):
"""Tests abort sequence."""
container_dir = os.path.join(self.root, 'apps', 'proid.myapp#001',
'data')
fs.mkdir_safe(container_dir)
app_abort.abort(container_dir,
why=app_abort.AbortedReason.INVALID_TYPE,
payload='test')
treadmill.appcfg.abort.flag_aborted.assert_called_with(
container_dir,
app_abort.AbortedReason.INVALID_TYPE,
'test'
)
treadmill.supervisor.control_service.assert_called_with(
os.path.join(self.root, 'apps', 'proid.myapp#001'),
treadmill.supervisor.ServiceControlAction.kill
)
def test_flag_aborted(self):
"""Tests flag abort sequence."""
container_dir = os.path.join(self.root, 'apps', 'proid.myapp#001',
'data')
fs.mkdir_safe(container_dir)
app_abort.flag_aborted(container_dir,
why=app_abort.AbortedReason.INVALID_TYPE,
payload='test')
aborted_file = os.path.join(container_dir, 'aborted')
with io.open(aborted_file) as f:
aborted = json.load(f)
self.assertEqual('invalid_type', aborted.get('why'))
self.assertEqual('test', aborted.get('payload'))
@mock.patch('kazoo.client.KazooClient.exists', mock.Mock())
@mock.patch('kazoo.client.KazooClient.create', mock.Mock())
@mock.patch('kazoo.client.KazooClient.delete', mock.Mock())
@mock.patch('kazoo.client.KazooClient.get_children', mock.Mock())
@mock.patch('treadmill.appevents.post', mock.Mock())
@mock.patch('treadmill.sysinfo.hostname',
mock.Mock(return_value='xxx.xx.com'))
@mock.patch('treadmill.zkutils.connect', mock.Mock())
@mock.patch('treadmill.zkutils.put', mock.Mock())
def test_report_aborted(self):
"""Tests report abort sequence."""
context.GLOBAL.zk.url = 'zookeeper://xxx@hhh:123/treadmill/mycell'
treadmill.zkutils.connect.return_value = kazoo.client.KazooClient()
kazoo.client.KazooClient.get_children.return_value = []
kazoo.client.KazooClient.exists.return_value = True
kazoo.client.KazooClient.create.reset()
kazoo.client.KazooClient.delete.reset()
app_abort.report_aborted(self.tm_env, 'proid.myapp#001',
why=app_abort.AbortedReason.TICKETS,
payload='test')
treadmill.appevents.post.assert_called_with(
mock.ANY,
events.AbortedTraceEvent(
instanceid='proid.myapp#001',
why='tickets',
payload='test',
),
)
if __name__ == '__main__':
unittest.main()
|
Performs EEG (electroencephalogram) testing. May perform evoked potential/nerve conduction procedures.
Measures and applies electrodes accurately according to international 10-20 system.
Performs EEG procedures on all age groups without supervision.
Performs evoked potential/nerve conduction procedures with supervision as needed.
Performs tests with portable equipment in patient room when necessary.
Recognizes artifact and documents, eliminates, or takes proper measures to monitor the artifact.
Adapts procedures to particular clinical circumstances, e.g., appropriate control setting changes, use of extra electrodes, extra montages, and use of activation procedures.
Selects predetermined electrode combinations as well as special combinations made necessary by the case under study.
Calibrates and adjusts the EEG apparatus in accordance with standard procedures.
Abstracts relevant information from the patient, patient’s clinical record, or family member, including seizure types, last seizure, medications, and birth history for pediatric patients.
Explains procedures to patients; observes patient for any unusual reactions or events, listing all pertinent information on the record.
Prepares preliminary reports, placing them in the patient’s chart or addressed to the patient’s physician; completes final report for physician interpretation.
Notifies appropriate personnel of equipment malfunctions and repairs needed.
Helps patients get into and out of chairs, lifts patients, raises patients in bed, and turns the patient.
Checks equipment, supplies, and accessories on a regular basis; assists in inventory control and ordering supplies.
EDUCATION: High School diploma or graduate of an accredited electroencephalographic technology program or possess equivalent training. Associate’s Degree preferred.
EXPERIENCE: One (1) year of related experience.
LICENSURE OR CERTIFICATION: Eligible for or successful completion of part I written exam for EEG registration by the American Board of Registered Electroencephalographic Technologists (REEGT) preferred. Current Heartsaver CPR (HSCPR or HSFACPR).
Knowledge of neurodiagnostic studies best practices and measurement techniques.
Skill in conducting EEG and neurodiagnostic measurements.
|
# Copyright (C) 2007 Red Hat, Inc., Kent Lamb <klamb@redhat.com>
# Copyright (C) 2014 Red Hat, Inc., Bryn M. Reeves <bmr@redhat.com>
# Copyright (C) 2021 Red Hat, Inc., Mark Reynolds <mreynolds@redhat.com>
#
# This file is part of the sos project: https://github.com/sosreport/sos
#
# This copyrighted material is made available to anyone wishing to use,
# modify, copy, or redistribute it subject to the terms and conditions of
# version 2 of the GNU General Public License.
#
# See the LICENSE file in the source distribution for further information.

from sos.report.plugins import Plugin, RedHatPlugin
import os


class DirectoryServer(Plugin, RedHatPlugin):

    short_desc = 'Directory Server'

    plugin_name = 'ds'
    profiles = ('identity',)

    files = ('/etc/dirsrv', '/opt/redhat-ds')
    packages = ('redhat-ds-base', 'redhat-ds-7')

    def check_version(self):
        if self.is_installed("redhat-ds-base") or \
                os.path.exists("/etc/dirsrv"):
            return "ds8"
        elif self.is_installed("redhat-ds-7") or \
                os.path.exists("/opt/redhat-ds"):
            return "ds7"
        return False

    def setup(self):
        self.add_forbidden_path([
            "/etc/dirsrv/slapd*/pin.txt",
            "/etc/dirsrv/slapd*/key3.db",
            "/etc/dirsrv/slapd*/pwfile.txt",
            "/etc/dirsrv/slapd*/*passw*",
            "/etc/dirsrv/admin-serv/key[3-4].db",
            "/etc/dirsrv/admin-serv/admpw",
            "/etc/dirsrv/admin-serv/password.conf"
        ])
        try:
            for d in os.listdir("/etc/dirsrv"):
                if d[0:5] == 'slapd':
                    certpath = os.path.join("/etc/dirsrv", d)
                    self.add_cmd_output("certutil -L -d %s" % certpath)
                    self.add_cmd_output("dsctl %s healthcheck" % d)
        except OSError:
            self._log_warn("could not list /etc/dirsrv")

        if not self.check_version():
            self.add_alert("Directory Server not found.")
        elif "ds8" in self.check_version():
            self.add_copy_spec([
                "/etc/dirsrv/slapd*/cert8.db",
                "/etc/dirsrv/slapd*/certmap.conf",
                "/etc/dirsrv/slapd*/dse.ldif",
                "/etc/dirsrv/slapd*/dse.ldif.startOK",
                "/etc/dirsrv/slapd*/secmod.db",
                "/etc/dirsrv/slapd*/schema/*.ldif",
                "/etc/dirsrv/admin-serv",
                "/var/log/dirsrv/*"
            ])
        elif "ds7" in self.check_version():
            self.add_copy_spec([
                "/opt/redhat-ds/slapd-*/config",
                "/opt/redhat-ds/slapd-*/logs"
            ])

        self.add_cmd_output("ls -l /var/lib/dirsrv/slapd-*/db/*")

    def postproc(self):
        # Example for scrubbing rootpw hash
        #
        # nsslapd-rootpw: AAAAB3NzaC1yc2EAAAADAQABAAABAQDeXYA3juyPqaUuyfWV2HuIM
        # v3gebb/5cvx9ehEAFF2yIKvsQN2EJGTV+hBM1DEOB4eyy/H11NqcNwm/2QsagDB3PVwYp
        # 9VKN3BdhQjlhuoYKhLwgtYUMiGL8AX5g1qxjirIkTRJwjbXkSNuQaXig7wVjmvXnB2o7B
        # zLtu99DiL1AizfVeZTYA+OVowYKYaXYljVmVKS+g3t29Obaom54ZLpfuoGMmyO64AJrWs
        #
        # to
        #
        # nsslapd-rootpw:********
        regexppass = r"(nsslapd-rootpw(\s)*:(\s)*)(\S+)([\r\n]\s.*)*\n"
        regexpkey = r"(nsSymmetricKey(\s)*::(\s)*)(\S+)([\r\n]\s.*)*\n"
        repl = r"\1********\n"
        self.do_path_regex_sub('/etc/dirsrv/*', regexppass, repl)
        self.do_path_regex_sub('/etc/dirsrv/*', regexpkey, repl)

# vim: set et ts=4 sw=4 :
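The scrubbing in `postproc` above can be sanity-checked in isolation. This quick sketch runs the same pattern and replacement against a small fabricated `dse.ldif` fragment (the hash value is made up), including an LDIF continuation line, which the `([\r\n]\s.*)*` group is there to swallow:

```python
import re

# Same pattern and replacement as in postproc above.
regexppass = r"(nsslapd-rootpw(\s)*:(\s)*)(\S+)([\r\n]\s.*)*\n"
repl = r"\1********\n"

# Fabricated LDIF fragment; the second line is an LDIF continuation
# (it starts with a space), so it belongs to the rootpw value.
sample = (
    "nsslapd-rootpw: {SSHA}AAAAB3NzaC1yc2E\n"
    " Obaom54ZLpfuoGMmyO64AJrWs\n"
    "nsslapd-port: 389\n"
)
scrubbed = re.sub(regexppass, repl, sample)
print(scrubbed)
# nsslapd-rootpw: ********
# nsslapd-port: 389
```

Both the value and its continuation line collapse into the single masked line, while unrelated attributes such as `nsslapd-port` are left untouched.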
|
16 panels @ a tenner a panel, so £160, but you could offer a discount if it's one of your customers.
I’d only take that on if I was desperate. I've had a bad experience with a difficult-access rooftop; it’s not awful but difficult enough to put me off. I’m sure plenty on here have the skills and know-how to get it done easy, but I hate difficult reaching jobs; I try to find/pick easy work. Having said that, if they accepted my price of £250 I’d swallow my pride and grumble to the Mrs how hard my day was.
If it had been perspex then probably £60, being 6'5" a few steps up a step ladder would solve awkward panels. With it being glass then probably double if they were already a custy. Would pass on the job if they weren't.
That's a real pain of a job since it's glass, and if bits keep coming out from the seals it could take hours to rinse off.
If it was a regular customer I'd say £120.
For a one-off I aim for £140 to £150 depending on the state of the glass, and tbh if it's a one-off I hope I don't get it.
20 panels of self-cleaning glass, which is a pain for rinsing down unless you have access to an outside tap. I would want £150 up north, but I've sacked off add-ons now, thankfully.
I've had bad experiences with glass roofs; they can catch you out and be a nightmare, and since I don't really want or need the add-on work I would be wanting £150 at least, depending on how bad it looks.
I must be missing something about how long that roof would take to clean?
I would hope to have it done in two hours max.
That's why I'd quote higher than a plastic one. I would probably do that for £70-£80 if it was plastic.
When you say "we" does that mean there were 2 people so it took 4 man hours?
I have just bought a boss narrow scaffold which I would have used, looks really professional. But now that I have so much work (Rained off today, so going to price another job) I would charge £200. If I didn't get it then I would be happy. Too much work is actually worse than too little because all you end up doing is postponing the work and upsetting the customers.
How much would Sanderson charge for that? Inside and out, probably 2 grand. I have a 1450mm wide Boss scaffold to sell, think it's an 8 metre tall tower. Give it to anyone on the forum for £500 as I have a narrow one now.