from scorched import SolrInterface
from scorched.search import Options, SolrSearch
class ExtraSolrInterface(SolrInterface):
"""Extends the SolrInterface class so that it uses the ExtraSolrSearch
class.
"""
def __init__(self, *args, **kwargs):
super(ExtraSolrInterface, self).__init__(*args, **kwargs)
def query(self, *args, **kwargs):
"""
:returns: SolrSearch -- A solrsearch.
Build a solr query
"""
# Change this line to hit our class instead of SolrSearch. All the rest
# of this class is the same.
q = ExtraSolrSearch(self)
if len(args) + len(kwargs) > 0:
return q.query(*args, **kwargs)
else:
return q
class ExtraSolrSearch(SolrSearch):
"""Base class for common search options management"""
option_modules = ('query_obj', 'filter_obj', 'paginator',
'more_like_this', 'highlighter', 'postings_highlighter',
'faceter', 'grouper', 'sorter', 'facet_querier',
'debugger', 'spellchecker', 'requesthandler',
'field_limiter', 'parser', 'pivoter', 'facet_ranger',
'term_vectors', 'stat', 'extra')
def _init_common_modules(self):
super(ExtraSolrSearch, self)._init_common_modules()
self.extra = ExtraOptions()
def add_extra(self, **kwargs):
newself = self.clone()
newself.extra.update(kwargs)
return newself
_count = None
def count(self):
if self._count is None:
# We haven't gotten the count yet. Get it. Clone self for this
# query or else we'll set rows=0 for remainder.
newself = self.clone()
r = newself.add_extra(rows=0).execute()
if r.groups:
total = getattr(r.groups, r.group_field)['ngroups']
else:
total = r.result.numFound
# Set the cache
self._count = total
return self._count
class ExtraOptions(Options):
def __init__(self, original=None):
if original is None:
self.option_dict = {}
else:
self.option_dict = original.option_dict.copy()
def update(self, extra_options):
self.option_dict.update(extra_options)
def options(self):
return self.option_dict
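A minimal usage sketch of the classes above (hypothetical Solr core URL and field name; rows=0 is the standard Solr parameter that count() relies on to fetch only the total):
si = ExtraSolrInterface('http://localhost:8983/solr/mycore')  # hypothetical core URL
search = si.query(text='foo')  # returns an ExtraSolrSearch
total = search.count()  # issues one rows=0 query and caches the result
results = search.add_extra(rows=10).execute()  # extra options ride along with the query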
|
Check whether you need the BetChain Casino bonus code to claim the promotions; with the exclusive code you get 20 free spins with no deposit. There is a welcome bonus up to €/$ or 1 Bitcoin with an additional 50 free spins. Get an exclusive Bitcoin bonus on your first deposit plus free spins; Nitrogen Sports is a Bitcoin-only sportsbook and casino with no sign-up required. As well as free BTC welcome bonuses, we also hand-pick the best deals on offer from bit-casinos, along with the godlike sites that give away satoshis. BTC bonus codes cover a wide range of offers and are generally found on specialist sites such as ours. They can be claimed only when balances are low or empty and provide only a small amount of mBTC. But do not worry; you can still enjoy no-deposit treats, either in the form of free spins or a player bonus, if you become a regular member. As mentioned earlier, claiming this reward is simple and easy. Many Bitcoin gambling platforms include them in their welcome and matched deposit bonus rewards, or you can use one of the promo codes to redeem them with no deposit required. Bob Casino promo codes work the same way. Simply follow these steps to try out real-money play for free. As of today, society is presented with startups and businesses dedicated to making Bitcoin a common mode of transaction. Be the next lucky winner. Requirements for Bitcoin bonus withdrawal: before you can start the withdrawal of your Bitcoin bonus, you must first meet the requirements set by each provider. Another reason Bitcoin gaming sites are favored by the cryptocurrency community is the various bonuses they offer.
Find out what the bonus code for PokerStars is! Simply load up your internet browser and you are ready to go. Bitcoin has a hard standing in the online casino world, but that has nothing to do with the currency itself, as one might assume at first glance. Using the Bitcoin payment option to make a withdrawal from an online casino is just as easy as using Bitcoin to make a deposit. All products are powered by top gaming software and an attractive affiliate program which can earn their affiliates lifetime commissions across all of their products. This makes it harder to manage a bankroll or play with betting strategies, especially if you are used to dealing only in BTC and have no feel for the other currencies. Players are usually asked to specify only their nickname and email address and some other minor details, which do not reveal personal information. The sportsbook offers a number of perks such as early opening lines, MLB dime lines, reduced juice NHL, high limits, re-bet functionality, generous parlay and teaser odds, live betting and more.
SportsBetting is constantly adding new promotions, making it that much more enticing for players to play there. They can either send an email to the customer support or fill in the contact form, and they will receive a prompt and efficient response. You can now use Bitcoin to get funds in and out of your favourite online casino account any time of the day. Every registered user is invited to take part in the Lucky Jack raffle that distributes mBTC daily among the luckiest users; winners in this giveaway game are determined randomly. There is a wide variety of payment and deposit options on offer at BetChain Online Casino. The Casino has more than 1, casino games on offer from diverse, established online gaming software brands such as Amatic, Betsoft, Spiromenal, Mr. But there is another convenient way that is effortless and fun. It is a fully licensed and regulated business by Antillephone N.V.
Bitcoin is an instant payment method and most deposits are processed in the blink of an eye, whereas some payment methods can take hours or even days to process transactions. As of today, using bitcoins is the fastest and most secure way to transfer money on the Internet. Players that enjoy the suspense created by the roulette ball as it rolls majestically over the numbers should check out any of the three variants of the game: American, French and European roulette. A Detailed Guide to Free Cryptocurrency. The casino is one of the best gaming platforms to play on, using multiple types of devices like smartphones, tablets, and even desktops. Find the Bitcoin casino list above and sign up with any of the casinos listed here to experience only the best in Bitcoin betting. BetChain Casino Bonuses 2. Aside from these usuals, there is a short list of thrillers such as Keno, Virtual Racebook 3D, Go Monkey and a few scratch card games to check out. However, to ensure your own safety, always check if the casinos that accept Bitcoin are regulated and safe. When it comes to collecting welcome bonuses, the procedure is usually the same for every currency.
They also offer one free withdrawal per month. Bitcoin is the revolutionary currency that has not only taken online commerce by storm; some of the best online casinos now accept it as a payment method. One of the other main advantages to using Bitcoin at online casinos is the fact that there are so many Bitcoin bonuses available. BetChain Casino Bonuses 2. CryptoCurrencyClarified does not recommend that any cryptocurrency should be bought, sold or held by you, and nothing on this website should be taken as an offer to buy, sell or hold a cryptocurrency. You will be able to enjoy bitcoin gambling games from the top software providers. Learn how your comment data is processed. However, using a regulated exchange often requires you to add a picture of your ID to open an account. Since the blockchain technology became a reality and bequeathed the world with cryptocurrencies, online gaming has risen an extra notch. Bitcoin has been fully optimised for online gambling and therefore it can be used by players using a smartphone, tablet, desktop or laptop computer. Once a player has completed the task they get rewarded with satoshis. The casino is provably fair and regulated by the Curacao government. Cryptocurrency trading may not be suitable for all users of this website. The best mobile bitcoin casino also rewards players with the best Bitcoin casino bonus. Give it a try, but before you do, check out our exclusive Bitstarz Casino bonus code. The slightly slower withdrawal time happens because Bitcoin withdrawals are often processed manually for security reasons. Before many established online casino sites dared to accept Bitcoin as a payment method, there were only a few dubious sites that offered a platform for the cryptocurrency. Bitcoin can also be a volatile currency, which means that the exchange rate can sometimes change dramatically. However, to ensure your own safety, always check if the casinos that accept Bitcoin are regulated and safe.
It is customary in online casinos to assign games with varying percentages of the bets that contribute to the cumulative wagering requirement amount.
To cash out your bonus and the winnings you will scoop from them, you have to play your rewards according to their terms.
What makes no deposit bonus more rewarding is there are no pitfalls involved at all. The only thing that you have to size up is the rollover requirements that come with these rewards, which will depend on the Bitcoin casino you are playing on.
Find out why mBit Casino is the best Bitcoin casino for many bettors.
Earn bitcoins with BitcoinCasinos. Latest Bitcoin casino bonuses. Bonuses for new and existing players. Free spins for account holders; players from France are welcome.
The Bitcoin and real money gambling platform claims to…. All bitcoin casinos reviewed here feature different kinds of bonuses for bitcoin users. In addition to sports, you can play table games like blackjack and poker. There are a lot of top bitcoin casinos that are accessible on smartphones and tablets for American players to enjoy playing casino games.
|
# This code is licensed under the MIT License (see LICENSE file for details)
from PyQt5 import Qt
import textwrap
import warnings
import numpy
from . import image
from . import histogram
from . import qt_property
from . import async_texture
SHADER_PROP_HELP = """The GLSL fragment shader used to render an image within a layer stack is created
by filling in the $-values from the following template (somewhat simplified) with the corresponding
attributes of the layer. A template for each layer in the stack is filled in, and the
final shader is the concatenation of all the templates.
NB: In case of error, calling del on the layer attribute causes it to revert to the default value.
// Simplified GLSL shader code:
// Below repeated for each layer
uniform sampler2D tex;
vec4 color_transform(vec4 in_, vec4 tint, float rescale_min, float rescale_range, float gamma_scalar)
{
vec4 out_;
out_.a = in_.a;
vec3 gamma = vec3(gamma_scalar, gamma_scalar, gamma_scalar);
${transform_section}
// default value for transform_section:
// out_.rgb = pow(clamp((in_.rgb - rescale_min) / (rescale_range), 0.0f, 1.0f), gamma); out_.rgba *= tint;
return clamp(out_, 0, 1);
}
s = texture2D(tex, tex_coord);
s = color_transform(
${getcolor_expression}, // default getcolor_expression for a grayscale image is: vec4(s.rrr, 1.0f)
${tint}, // [0,1] normalized RGBA component values that scale results of getcolor_expression
rescale_min, // [0,1] scaled version of layer.min
rescale_range, // [0,1] scaled version of layer.max - layer.min
${gamma});
sca = s.rgb * s.a;
${Layer.BLEND_FUNCTIONS[${blend_function}]}
da = clamp(da, 0, 1);
dca = clamp(dca, 0, 1);
// end per-layer repeat
gl_FragColor = vec4(dca / da, da * layer_stack_item_opacity);"""
def coerce_to_str(v):
return '' if v is None else str(v)
def coerce_to_tint(v):
v = tuple(map(float, v))
if len(v) not in (3,4) or not all(map(lambda v_: 0 <= v_ <= 1, v)):
raise ValueError('The iterable assigned to tint must represent 3 or 4 real numbers in the interval [0, 1].')
if len(v) == 3:
v += (1.0,)
return v
def coerce_to_radius(v):
if v == '' or v is None:
return None
else:
v = float(v)
if v <= 0:
raise ValueError('Radius must be positive')
if v > 0.707:
v = None # a radius above 0.707 already covers the whole unit image, so treat it as un-masked
return v
class Layer(qt_property.QtPropertyOwner):
""" The class Layer contains properties that control Image presentation.
Properties:
visible
mask_radius
auto_min_max
min
max
gamma
histogram_min
histogram_max
getcolor_expression
tint
transform_section
blend_function
opacity
The 'changed' signal is emitted when any property impacting image presentation
is modified or image data is explicitly changed or refreshed. Each specific
property also has its own changed signal, such as 'min_changed' &c.
"""
GAMMA_RANGE = (0.0625, 16.0)
IMAGE_TYPE_TO_GETCOLOR_EXPRESSION = {
'G' : 'vec4(s.rrr, 1.0f)',
'Ga' : 'vec4(s.rrr, s.g)',
'rgb' : 'vec4(s.rgb, 1.0f)',
'rgba': 's'}
DEFAULT_TRANSFORM_SECTION = 'out_.rgb = pow(clamp((in_.rgb - rescale_min) / (rescale_range), 0.0f, 1.0f), gamma); out_.rgba *= tint;'
# Blend functions adapted from http://dev.w3.org/SVG/modules/compositing/master/
BLEND_FUNCTIONS = {
'normal' : ('dca = sca + dca * (1.0f - s.a);', # AKA src-over
'da = s.a + da - s.a * da;'),
'multiply' : ('dca = sca * dca + sca * (1.0f - da) + dca * (1.0f - s.a);',
'da = s.a + da - s.a * da;'),
'screen' : ('dca = sca + dca - sca * dca;',
'da = s.a + da - s.a * da;'),
'overlay' : ('isa = 1.0f - s.a; osa = 1.0f + s.a;',
'ida = 1.0f - da; oda = 1.0f + da;',
'sada = s.a * da;',
'for(i = 0; i < 3; ++i){',
' dca[i] = (dca[i] + dca[i] <= da) ?',
' (sca[i] + sca[i]) * dca[i] + sca[i] * ida + dca[i] * isa :',
' sca[i] * oda + dca[i] * osa - (dca[i] + dca[i]) * sca[i] - sada;}',
'da = s.a + da - sada;'),
# 'src' : ('dca = sca;',
# 'da = s.a;'),
# 'dst-over' : ('dca = dca + sca * (1.0f - da);',
# 'da = s.a + da - s.a * da;'),
# 'plus' : ('dca += sca;',
# 'da += s.a;'),
# 'difference':('dca = (sca * da + dca * s.a - (sca + sca) * dca) + sca * (1.0f - da) + dca * (1.0f - s.a);',
# 'da = s.a + da - s.a * da;')
}
for k, v in BLEND_FUNCTIONS.items():
BLEND_FUNCTIONS[k] = ' // blending function name: {}\n '.format(k) + '\n '.join(v)
del k, v
# A change to any mutable property, including .image, potentially impacts layer presentation. For convenience, .changed is emitted whenever
# any mutable-property-changed signal is emitted, including or calling .image.refresh(). Note that the .changed signal is emitted by
# the qt_property.Property instance (which involves some deep-ish Python magic)
# NB: .image_changed is the more specific signal emitted in addition to .changed for modifications to .image.
#
changed = Qt.pyqtSignal(object)
image_changed = Qt.pyqtSignal(object)
opacity_changed = Qt.pyqtSignal(object)
# below properties are necessary for proper updating of LayerStack table view when images change
dtype_changed = Qt.pyqtSignal(object)
type_changed = Qt.pyqtSignal(object)
size_changed = Qt.pyqtSignal(object)
name_changed = Qt.pyqtSignal(object)
def __init__(self, image=None, parent=None):
self._retain_auto_min_max_on_min_max_change = False
self._image = None
super().__init__(parent)
self.image_changed.connect(self.changed)
if image is not None:
self.image = image
else:
# self._image is already None, so setting self.image = None will just
# return immediately from the setter, without setting the below.
self.dtype = None
self.type = None
self.size = None
self.name = None
def __repr__(self):
image = self.image
return '{}; {}image={}>'.format(
super().__repr__()[:-1],
'visible=False, ' if not self.visible else '',
'None' if image is None else image.__repr__())
@classmethod
def from_savable_properties_dict(cls, prop_dict):
ret = cls()
for pname, pval, in prop_dict.items():
setattr(ret, pname, pval)
return ret
def get_savable_properties_dict(self):
ret = {name : prop.__get__(self) for name, prop in self._properties.items() if not prop.is_default(self)}
return ret
@property
def image(self):
return self._image
@image.setter
def image(self, new_image):
if new_image is self._image:
return
if new_image is not None:
if not isinstance(new_image, image.Image):
new_image = image.Image(new_image)
try:
new_image.changed.connect(self._on_image_changed)
except Exception as e:
if new_image is not None:
new_image.changed.disconnect(self._on_image_changed)
self._image = None
raise e
if self._image is not None:
# deallocate old texture when we're done with it.
self._image.texture.destroy()
self._image.changed.disconnect(self._on_image_changed)
self._image = new_image
if new_image is None:
self.dtype = None
self.type = None
self.size = None
self.name = None
else:
min, max = new_image.valid_range
if not (min <= self.histogram_min <= max):
del self.histogram_min # reset histogram min (delattr on the qt_property returns it to the default)
if not (min <= self.histogram_max <= max):
del self.histogram_max # reset histogram max (delattr on the qt_property returns it to the default)
self.dtype = new_image.data.dtype
self.type = new_image.type
self.size = new_image.size
self.name = new_image.name
for proxy_prop in ('dtype', 'type', 'size', 'name'):
getattr(self, proxy_prop+'_changed').emit(self)
self._on_image_changed()
def _on_image_changed(self, changed_region=None):
if self.image is not None:
# upload texture before calculating the histogram, so that the background texture upload (slow) runs in
# parallel with the foreground histogram calculation (slow)
self.image.texture.upload(changed_region)
self.calculate_histogram()
self._update_property_defaults()
if self.image is not None:
if self.auto_min_max:
self.do_auto_min_max()
else:
l, h = self.image.valid_range
if self.min < l:
self.min = l
if self.max > h:
self.max = h
self.image_changed.emit(self)
def calculate_histogram(self):
r_min = None if self._is_default('histogram_min') else self.histogram_min
r_max = None if self._is_default('histogram_max') else self.histogram_max
self.image_min, self.image_max, self.histogram = histogram.histogram(
self.image.data, (r_min, r_max), self.image.image_bits, self.mask_radius)
def generate_contextual_info_for_pos(self, x, y, idx=None):
if self.image is None:
return None
else:
image_text = self.image.generate_contextual_info_for_pos(x, y)
if image_text is None:
return None
if idx is not None:
image_text = '{}: {}'.format(idx, image_text)
return image_text
def do_auto_min_max(self):
assert self.image is not None
self._retain_auto_min_max_on_min_max_change = True
try:
self.min = max(self.image_min, self.histogram_min)
self.max = min(self.image_max, self.histogram_max)
finally:
self._retain_auto_min_max_on_min_max_change = False
visible = qt_property.Property(
default_value=True,
coerce_arg_fn=bool)
def _mask_radius_post_set(self, v):
self._on_image_changed()
mask_radius = qt_property.Property(
default_value=None,
coerce_arg_fn=coerce_to_radius,
post_set_callback=_mask_radius_post_set)
def _auto_min_max_post_set(self, v):
if v and self.image is not None:
self.do_auto_min_max()
auto_min_max = qt_property.Property(
default_value=False,
coerce_arg_fn=bool,
post_set_callback=_auto_min_max_post_set)
def _min_default(self):
if self.image is None:
return 0.0
else:
return self._histogram_min_default()
def _max_default(self):
if self.image is None:
return 65535.0
else:
return self._histogram_max_default()
def _min_max_pre_set(self, v):
if self.image is not None:
l, h = self.image.valid_range
if not l <= v <= h:
warnings.warn('min/max values for this image must be in the closed interval [{}, {}].'.format(l, h))
return False
def _min_post_set(self, v):
if v > self.max:
self.max = v
if not self._retain_auto_min_max_on_min_max_change:
self.auto_min_max = False
def _max_post_set(self, v):
if v < self.min:
self.min = v
if not self._retain_auto_min_max_on_min_max_change:
self.auto_min_max = False
min = qt_property.Property(
default_value=_min_default,
coerce_arg_fn=float,
pre_set_callback=_min_max_pre_set,
post_set_callback=_min_post_set)
max = qt_property.Property(
default_value=_max_default,
coerce_arg_fn=float,
pre_set_callback=_min_max_pre_set,
post_set_callback=_max_post_set)
def _gamma_pre_set(self, v):
r = self.GAMMA_RANGE
if not r[0] <= v <= r[1]:
warnings.warn('gamma value must be in the closed interval [{}, {}].'.format(*r))
return False
gamma = qt_property.Property(
default_value=1.0,
coerce_arg_fn=float,
pre_set_callback=_gamma_pre_set)
def _histogram_min_default(self):
if self.image is None:
return 0.0
elif self.dtype == numpy.float32:
return self.image_min
else:
return float(self.image.valid_range[0])
def _histogram_max_default(self):
if self.image is None:
return 65535.0
elif self.dtype == numpy.float32:
return self.image_max
else:
return float(self.image.valid_range[1])
def _histogram_min_pre_set(self, v):
l, h = (0, 65535.0) if self.image is None else self.image.valid_range
if not l <= v <= h:
warnings.warn('histogram_min value must be in [{}, {}].'.format(l, h))
return False
if v >= self.histogram_max:
warnings.warn('histogram_min must be less than histogram_max.')
return False
def _histogram_max_pre_set(self, v):
l, h = (0, 65535.0) if self.image is None else self.image.valid_range
if not l <= v <= h:
warnings.warn('histogram_max value must be in [{}, {}].'.format(l, h))
return False
if v <= self.histogram_min:
warnings.warn('histogram_max must be greater than histogram_min.')
return False
def _histogram_min_max_post_set(self, v):
if self.image is not None:
self.calculate_histogram()
self._retain_auto_min_max_on_min_max_change = True
try:
if self.min < self.histogram_min:
self.min = self.histogram_min
if self.max > self.histogram_max:
self.max = self.histogram_max
finally:
self._retain_auto_min_max_on_min_max_change = False
if self.image is not None and self.auto_min_max:
self.do_auto_min_max()
histogram_min = qt_property.Property(
default_value=_histogram_min_default,
coerce_arg_fn=float,
pre_set_callback=_histogram_min_pre_set,
post_set_callback=_histogram_min_max_post_set)
histogram_max = qt_property.Property(
default_value=_histogram_max_default,
coerce_arg_fn=float,
pre_set_callback=_histogram_max_pre_set,
post_set_callback=_histogram_min_max_post_set)
def _getcolor_expression_default(self):
image = self.image
if image is None:
return ''
else:
return self.IMAGE_TYPE_TO_GETCOLOR_EXPRESSION[image.type]
getcolor_expression = qt_property.Property(
default_value=_getcolor_expression_default,
coerce_arg_fn=coerce_to_str,
doc=SHADER_PROP_HELP)
def _tint_pre_set(self, v):
if self.tint[3] != v[3]: # v is the coerced 4-tuple; compare alpha to alpha
self.opacity_changed.emit(self)
tint = qt_property.Property(
default_value=(1.0, 1.0, 1.0, 1.0),
coerce_arg_fn=coerce_to_tint,
pre_set_callback=_tint_pre_set,
doc = SHADER_PROP_HELP)
transform_section = qt_property.Property(
default_value=DEFAULT_TRANSFORM_SECTION,
coerce_arg_fn=coerce_to_str,
doc=SHADER_PROP_HELP)
def _blend_function_pre_set(self, v):
if v not in self.BLEND_FUNCTIONS:
raise ValueError('The string assigned to blend_function must be one of:\n' + '\n'.join("'" + s + "'" for s in sorted(self.BLEND_FUNCTIONS.keys())))
blend_function = qt_property.Property(
default_value='screen',
coerce_arg_fn=str,
pre_set_callback=_blend_function_pre_set,
doc=SHADER_PROP_HELP + '\n\nSupported blend_functions:\n ' + '\n '.join("'" + s + "'" for s in sorted(BLEND_FUNCTIONS.keys())))
@property
def opacity(self):
return self.tint[3]
@opacity.setter
def opacity(self, v):
v = float(v)
if not 0 <= v <= 1:
raise ValueError('The value assigned to opacity must be a real number in the interval [0, 1].')
t = list(self.tint)
t[3] = v
self.tint = t #NB: tint takes care of emitting opacity_changed
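A minimal usage sketch of the Layer class above (hypothetical values; assumes a grayscale numpy array is acceptable wherever the image setter wraps raw data in image.Image):
# Hypothetical usage sketch; the array shape and property values are illustrative.
layer = Layer(numpy.zeros((512, 512), dtype=numpy.uint16))
layer.auto_min_max = True # track image_min/image_max automatically
layer.tint = (1.0, 0.8, 0.8) # coerced to (1.0, 0.8, 0.8, 1.0)
layer.blend_function = 'multiply' # must be a key of Layer.BLEND_FUNCTIONS
layer.opacity = 0.5 # rewrites tint[3] and emits opacity_changed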
|
Here I want to share a wonderful online shopping experience, thanks to the patient service and a cost-effective product: the Foxwell NT510 multi-system scanner.
I owned a small workshop and I usually fixed and sold cars I was familiar with. I mainly worked on Chrysler, VW/Audi, BMW, Ford, and GM. Several days ago, I wanted to look for a scanner which supports full functions for some specific car models. A friend online told me the NT510 supported my vehicles, including Chrysler, VW and BMW. I also had a US-made Ford F250 pickup truck, and it seemed that the NT510 did not support Ford. So I asked the shop's online customer service whether there were better products for me to scan and program features for Chrysler, VW, BMW and Ford. Besides, if I bought one car brand for the NT510, could I get full access to all computer features of that vehicle beyond the basic OBDII readout, such as SRS, ABS, airbags, etc.?
The customer service brought me good news: their factory had just released the American Ford software the day before yesterday. What a coincidence! I could order it and choose the American Ford software, but I had to pay for the additional Chrysler software. After negotiation, I purchased the Chrysler NT510 and paid $60 to have both the GM and Ford software added to the device I purchased (which originally cost $200).
After three days in delivery, I received my package without damage. The package contained the scan tool, a user guide, a memory card, a USB cable, a software CD and a nylon carry pouch. They provided a free online update before delivery.
As soon as it arrived, I successfully tested the module-programming functions I wanted with the Foxwell NT510. The only regret was that the NT510 could not disable features like seat belt warning chimes and lights, which was an additional requirement of mine. Even so, I was still very satisfied with this shopping experience.
This post mainly displays how to diagnose a Mercedes-Benz W211 step by step with the JDiag Elite J2534.
What is the JDiag Elite J2534?
The JDiag Elite J2534, made by JDiag Inc, is a diagnostic and coding/programming tool, the new generation of J2534 device. It can perform module programming, diagnostic functions, key programming and data monitoring. It is built with a JDiag tablet touch screen. It features pre-installed software and free updates for life. It is compatible with optional operating systems including Win7 64-bit and Win8.1 64-bit; when you turn on the PC, you choose a system, and all are preinstalled.
2. Connect USB cable to laptop, OBDII cable to car OBDII port.
4. Go to test it, details as follows.
5. Press "F2", Data are being determined, please wait.
11. select ME-SF 2/8-Motor electronics 2.8-07, communication with the control module group gasoline engine is being established.
13. Select "control unit version"
15. Select "preconditions for test"
This post shows step by step how to make a new key for a 2013 Mazda 6 using the OBDSTAR F-100. Following the parts below, you will find the F100 key programmer very easy to use, since the machine displays instructions that guide you through each next step.
Press the lock and unlock buttons of the following right key.
Connect OBDSTAR F100 key pro to the device and open it, you will find it can automatically read out Mazda V30.15.
Get message reading "All smart keys lost, add smart key, switch ignition on". Then select "Switch ignition on ".
Then get another message reading "All the smart key out of the car 1 meters away. " Then you should take the two smart keys out of the car 1 meter away.
Then get prompt reading "Press and hold the vehicle start button for 10 seconds, Start button indicator li-"
Back to the options "All smart keys lost, add smart key, switch ignition on", and select "All smart keys lost" then get instruction "All smart keys will be erased. Min keys are required". Press ENTER to continue, press ESC to return.
Get prompt "Current number of smart keys: 2. Press ENTER to continue, press ESC to return.
Configuring the system, please wait..
Ignition switch status is open? Click the F100 "ESC" button, then follow the prompt to press and hold the vehicle start button for 10 seconds; the start button indicator light turns from green to yellow. Press the start button again and the instrument cluster turns on.
Get message "Current number smart keys:0"
Take the keys back to the car.
Use a new smart key to contact start button, press Enter to continue.
After configuring the system, it will read out current number of smart keys is 1.
To program next one, press ENTER to continue.
Turn off ignition-configuring the system- turn on ignition.
After configuring the system, it will read out current number of smart keys are 2.
Press ESC to exit. Then get a prompt reading "You have to complete the following procedures, engine can be started"
2.Put on the brakes, the key 1 press start button to start the engine for 1 second, then turn off the ignition switch.
3.If you want to add more keys, please repeat procedure 2 , complete.
Test the new key, it is working.
|
# coding=utf-8
# Copyright (c) 2015 EMC Corporation.
# All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
from __future__ import unicode_literals
import unittest
from hamcrest import assert_that, has_item, raises
from hamcrest import equal_to
from storops_test.vnx.nas_mock import t_nas, patch_nas
from storops.exception import VNXBackendError
from storops.vnx.resource.mover import VNXMover
from storops.vnx.resource.nfs_share import NfsHostConfig, VNXNfsShare
__author__ = 'Jay Xu'
class VNXNfsShareTest(unittest.TestCase):
@patch_nas
def test_get_all_share(self):
shares = VNXNfsShare.get(t_nas())
assert_that(len(shares), equal_to(26))
share = next(s for s in shares if s.path == '/EEE')
self.verify_share_eee(share)
@patch_nas(xml_output='abc.xml')
def test_get_share_by_path_empty(self):
def f():
path = '/EEE'
shares = VNXNfsShare.get(t_nas(), path=path)
assert_that(len(shares), equal_to(1))
assert_that(f, raises(IOError))
@patch_nas
def test_get_share_by_path_success(self):
path = '/EEE'
shares = VNXNfsShare.get(t_nas(), path=path)
assert_that(len(shares), equal_to(1))
share = next(s for s in shares if s.path == path)
self.verify_share_eee(share)
@patch_nas
def test_get_share_by_mover_id(self):
mover = self.get_mover_1()
shares = VNXNfsShare.get(t_nas(), mover=mover)
assert_that(len(shares), equal_to(24))
share = next(s for s in shares if s.path == '/EEE')
self.verify_share_eee(share)
@staticmethod
def verify_share_eee(share):
assert_that(share.path, equal_to('/EEE'))
assert_that(share.read_only, equal_to(False))
assert_that(share.fs_id, equal_to(213))
assert_that(share.mover_id, equal_to(1))
assert_that(len(share.root_hosts), equal_to(41))
assert_that(share.root_hosts, has_item('10.110.43.94'))
assert_that(len(share.access_hosts), equal_to(41))
assert_that(share.access_hosts, has_item('10.110.43.94'))
assert_that(len(share.rw_hosts), equal_to(41))
assert_that(share.rw_hosts, has_item('10.110.43.94'))
assert_that(len(share.ro_hosts), equal_to(41))
assert_that(share.ro_hosts, has_item('10.110.43.94'))
@patch_nas
def test_modify_not_exists(self):
def f():
host_config = NfsHostConfig(
root_hosts=['1.1.1.1', '2.2.2.2'],
ro_hosts=['3.3.3.3'],
rw_hosts=['4.4.4.4', '5.5.5.5'],
access_hosts=['6.6.6.6'])
mover = self.get_mover_1()
share = VNXNfsShare(cli=t_nas(), mover=mover, path='/not_found')
share.modify(ro=False, host_config=host_config)
assert_that(f, raises(VNXBackendError, 'does not exist'))
@patch_nas
def test_modify_success(self):
host_config = NfsHostConfig(access_hosts=['7.7.7.7'])
mover = self.get_mover_1()
share = VNXNfsShare(cli=t_nas(), mover=mover, path='/EEE')
resp = share.modify(ro=True, host_config=host_config)
assert_that(resp.is_ok(), equal_to(True))
@patch_nas
def test_create_no_host(self):
def f():
mover = self.get_mover_1()
VNXNfsShare.create(cli=t_nas(), mover=mover, path='/invalid')
assert_that(f, raises(VNXBackendError, 'is invalid'))
@patch_nas
def test_create_success(self):
mover = self.get_mover_1()
share = VNXNfsShare.create(cli=t_nas(), mover=mover, path='/EEE')
assert_that(share.path, equal_to('/EEE'))
assert_that(share.mover_id, equal_to(1))
assert_that(share.existed, equal_to(True))
assert_that(share.fs_id, equal_to(243))
@patch_nas
def test_create_with_host_config(self):
mover = self.get_mover_1()
host_config = NfsHostConfig(
root_hosts=['1.1.1.1', '2.2.2.2'],
ro_hosts=['3.3.3.3'],
rw_hosts=['4.4.4.4', '5.5.5.5'],
access_hosts=['6.6.6.6'])
share = VNXNfsShare.create(cli=t_nas(), mover=mover, path='/FFF',
host_config=host_config)
assert_that(share.fs_id, equal_to(247))
assert_that(share.path, equal_to('/FFF'))
assert_that(share.existed, equal_to(True))
assert_that(share.access_hosts, has_item('6.6.6.6'))
@patch_nas
def test_delete_success(self):
mover = self.get_mover_1()
share = VNXNfsShare(cli=t_nas(), mover=mover, path='/EEE')
resp = share.delete()
assert_that(resp.is_ok(), equal_to(True))
@patch_nas
def test_delete_not_found(self):
def f():
mover = self.get_mover_1()
share = VNXNfsShare(cli=t_nas(), mover=mover, path='/not_found')
share.delete()
assert_that(f, raises(VNXBackendError, 'Invalid argument'))
@staticmethod
def get_mover_1():
return VNXMover(mover_id=1, cli=t_nas())
@patch_nas
def test_mover_property(self):
mover = self.get_mover_1()
share = VNXNfsShare.get(cli=t_nas(), mover=mover, path='/EEE')
mover = share.mover
assert_that(mover.existed, equal_to(True))
assert_that(mover.role, equal_to('primary'))
@patch_nas
def test_fs_property(self):
mover = self.get_mover_1()
share = VNXNfsShare.get(cli=t_nas(), mover=mover, path='/EEE')
fs = share.fs
assert_that(fs.existed, equal_to(True))
assert_that(fs.fs_id, equal_to(243))
@patch_nas
def test_allow_ro_hosts(self):
mover = self.get_mover_1()
share = VNXNfsShare(cli=t_nas(), mover=mover, path='/minjie_fs1')
resp = share.allow_ro_access('1.1.1.1', '2.2.2.2')
assert_that(resp.is_ok(), equal_to(True))
@patch_nas
def test_deny_hosts(self):
mover = self.get_mover_1()
share = VNXNfsShare(cli=t_nas(), mover=mover, path='/minjie_fs2')
resp = share.deny_access('1.1.1.1', '2.2.2.2')
assert_that(resp.is_ok(), equal_to(True))
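Taken together, the tests above sketch the share API. A minimal non-test usage sketch (hypothetical addresses and path; cli stands for a live storops NAS connection in place of the mocked t_nas()):
# Hypothetical sketch: host addresses, path and mover ID are illustrative.
config = NfsHostConfig(rw_hosts=['10.0.0.1'], ro_hosts=['10.0.0.2'])
mover = VNXMover(mover_id=1, cli=cli)
share = VNXNfsShare.create(cli=cli, mover=mover, path='/export1', host_config=config)
share.allow_ro_access('10.0.0.3') # grant read-only access to one more host
share.deny_access('10.0.0.2') # revoke access again
share.delete()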
|
The new Renaissance will facilitate a shift in paradigm to expand consciousness without inhibition. It encompasses arts and spirituality (in the Gnostic sense) where the divine is experienced through sacred rituals, soul family connections and ceremonies in free artistic expressions.
We are birthing a new philosophy that inspires new forms of art, expression, thinking and feeling that are in tune with our full potential as creators, in full consciousness of the One living organism we are all part of. This will empower, liberate and unleash the inner child in everyone, and encourage public displays of positive excitement, happiness and love shared freely among communities.
Art and Music can communicate the deepest keys to understanding who we are and what we are capable of. They can speak directly to the human mind and psyche through the use of vibration.
Together we can create art that breaks through the old mold and boundaries of our current society. This new philosophy will help us think globally and inclusively in order to bring all parts together in new combinations of Oneness.
The candle must shine brightly now because that is where passion lies. Finding your joy is where it all begins.
Art, which is simply creative expression, has been misused a lot on this planet, for programming and to generate negative, distorted energy fields. We need a rebirth, a new breath, expression coming from our true selves, done with pure positive intentions, in a state of joy and innocence.
Spirituality, which is simply a practice in every moment of the true nature of creation, has also been misused: teachings distorted, meanings reversed, and mysteries used for negative intentions. All teachings have been distorted to some degree. Therefore, we need to reconnect with practical truths that make sense, that we can apply daily and feel the connection from, and through which we can create positive changes on this planet. We need to reclaim the divine symbols and mysteries to rebirth our connection with the truth in every moment, on every aspect.
This Renaissance, this rebirth, happens step by step, in our daily practice, and in our local communities.
The Renaissance Group wants to help you share your ideas, designs and concepts! Suggest an article for the Renaissance website by contacting us.
We envision a society with a new way of thinking and feeling that facilitates a shift in paradigm and an expansion of consciousness without control. A society made up of inspired, passionate people who love to be free and are in tune with each other.
|
#!/usr/bin/env python
import threading
import Queue
import time
from SimpleXMLRPCServer import SimpleXMLRPCServer
from SimpleXMLRPCServer import SimpleXMLRPCRequestHandler
import xmlrpclib
rpc_path = '/RPC2'
# The RPC server listens at host:port/rpc_path
class RequestHandler(SimpleXMLRPCRequestHandler):
rpc_paths = (rpc_path,)
class RpcServer(threading.Thread):
def __init__(self, queue_maxsize=1000, host='127.0.0.1', port=8000):
self.host = host
self.port = port
#Initialise the thread
threading.Thread.__init__(self)
# Python warns about file locking and such with daemon threads
# but we are not using any resources like that here
# https://docs.python.org/2/library/threading.html#thread-objects
self.daemon = True
self.server = SimpleXMLRPCServer((host, port),
requestHandler=RequestHandler,
logRequests=False
)
self.server.register_function(self.put)
# TODO: the default of 1000 should be more than enough, but we should make sure.
# Annoying! LIFO queues don't have a clear() method.
self.queue = Queue.Queue(queue_maxsize)
def run(self):
self.server.serve_forever()
# Request data is expected to be a dictionary since
# only simple data types can be serialized over xmlrpc (python objects cannot)
# See: http://www.tldp.org/HOWTO/XML-RPC-HOWTO/xmlrpc-howto-intro.html#xmlrpc-howto-types
def put(self, requestData):
# Drop any stale items first. Queue.mutex is not reentrant, so the
# put() must happen outside the locked block to avoid a deadlock.
with self.queue.mutex:
self.queue.queue.clear()
self.queue.put(requestData)
if self.queue.full():
print "WARNING! Max data queue limit reached: " + str(self.queue.qsize())
#Echo for now
return requestData
def url(self):
return "http://%s:%i%s" % (self.host, self.port, rpc_path)
# LIFO get: return the newest item and discard the rest.
# Raises IndexError if the queue is empty, so check empty() first.
def get(self):
with self.queue.mutex:
item = self.queue.queue[-1]
self.queue.queue.clear()
return item
# Example
if __name__ == '__main__':
# Run the Rpc server on a new thread
rpc_server = RpcServer()
print "Running XML-RPC server at " + rpc_server.url()
rpc_server.start()
# Make a client
s = xmlrpclib.ServerProxy(rpc_server.url())
# Echo
count=0
while True:
s.put({"x": count})
count += 1
if not rpc_server.queue.empty():
print "Item received: " + str(rpc_server.queue.get())
time.sleep(3)
|
The workshop is an essential part of our company. We pride ourselves on being able to respond to all possible demands, offering a wide range of skills. Our technicians have developed an unequalled level of know-how, in order to assure you of the best possible results. A professional team leader takes charge of your boat, assuring you unequalled quality of work. Our professionals are capable of repairing, maintaining and managing your boat from A to Z.
- Electrical: installing 230 V circuits, inverters and chargers, generators…
You want to get a quote from the workshop?
Nothing could be simpler: contact the site by email at h2o@h2ofrance.com or by phone on 03 80 39 08 10.
|
# -*- coding: utf-8 -*-
#
# AtHomePowerlineServer
# Copyright © 2020 Dave Hocker
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, version 3 of the License.
#
# See the LICENSE file for more details.
#
#
# Action group devices table model
#
from database.AtHomePowerlineServerDb import AtHomePowerlineServerDb
from .base_table import BaseTable
class ActionGroupDevices(BaseTable):
def __init__(self):
pass
def get_group_devices(self, group_id):
self.clear_last_error()
conn = None
try:
conn = AtHomePowerlineServerDb.GetConnection()
c = AtHomePowerlineServerDb.GetCursor(conn)
# Join group membership records to device records for the given group
rset = c.execute(
"SELECT * from ManagedDevices "
"JOIN ActionGroupDevices ON ActionGroupDevices.group_id=:group_id "
"WHERE ManagedDevices.id=ActionGroupDevices.device_id", {"group_id": group_id}
)
result = ActionGroupDevices.rows_to_dict_list(rset)
except Exception as ex:
self.set_last_error(ActionGroupDevices.SERVER_ERROR, str(ex))
result = None
finally:
# Make sure connection is closed
if conn is not None:
conn.close()
return result
def insert_device(self, group_id, device_id):
"""
Insert a group device record
:param group_id: The containing group ID
:param device_id: The device ID being added to the group
:return:
"""
self.clear_last_error()
conn = None
try:
conn = AtHomePowerlineServerDb.GetConnection()
c = AtHomePowerlineServerDb.GetCursor(conn)
c.execute(
"INSERT INTO ActionGroupDevices (group_id,device_id) values (?,?)",
(group_id, device_id)
)
conn.commit()
# Get id of inserted record
id = c.lastrowid
except Exception as ex:
self.set_last_error(ActionGroupDevices.SERVER_ERROR, str(ex))
id = None
finally:
# Make sure connection is closed
if conn is not None:
conn.close()
return id
def delete_device(self, group_id, device_id):
self.clear_last_error()
conn = None
try:
conn = AtHomePowerlineServerDb.GetConnection()
c = AtHomePowerlineServerDb.GetCursor(conn)
# Delete the membership record for this group/device pair
c.execute(
"DELETE FROM ActionGroupDevices WHERE group_id=:group_id AND device_id=:device_id",
{"group_id": group_id, "device_id": device_id}
)
conn.commit()
change_count = conn.total_changes
except Exception as ex:
self.set_last_error(ActionGroupDevices.SERVER_ERROR, str(ex))
change_count = 0
finally:
# Make sure connection is closed
if conn is not None:
conn.close()
return change_count
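A minimal usage sketch of the table model above (hypothetical IDs; assumes the database has been initialized elsewhere via AtHomePowerlineServerDb, and that BaseTable exposes the error recorded by set_last_error):
# Hypothetical sketch: group and device IDs are illustrative.
agd = ActionGroupDevices()
rec_id = agd.insert_device(group_id=1, device_id=42) # rowid of new record, or None on failure
devices = agd.get_group_devices(1) # list of device dicts, or None on error
removed = agd.delete_device(group_id=1, device_id=42) # change count, 0 on error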
|
If you happen to be looking for content I’ve posted, you’ll have a better chance using the search box, which offers results as you type. It will greatly increase your chance of finding the exact content you’re looking for.
I’ve gotten a few requests via email lately about finding specific content I posted. I highly recommend using the search option on the site. You’ll be able to find what you are looking for in real time: the search displays results as you type, making it easier for you to find exactly what you’re after.
Alternatively, you can always use the categories that are available.
Next Clip-on mic with condenser or Shotgun mic or?
|
#! /usr/bin/env python
"""
Created on Thu Jun 29 11:02:28 2017
@author: njlyons
"""
from landlab import imshow_grid, RasterModelGrid
from landlab.plot.event_handler import query_grid_on_button_press
from matplotlib.pyplot import gcf
from matplotlib.backend_bases import Event
from numpy.testing import assert_equal
def test_query_grid_on_button_press():
rmg = RasterModelGrid((5, 5))
imshow_grid(rmg, rmg.nodes, cmap='RdYlBu')
# Programmatically create an event near the grid center.
event = Event('simulated_event', gcf().canvas)
event.xdata = int(rmg.number_of_node_columns * 0.5)
event.ydata = int(rmg.number_of_node_rows * 0.5)
results = query_grid_on_button_press(event, rmg)
x_coord = results['grid location']['x_coord']
y_coord = results['grid location']['y_coord']
msg = 'Items: Simulated matplotlib event and query results.'
assert_equal(x_coord, event.xdata, msg)
assert_equal(y_coord, event.ydata, msg)
msg = 'Items: Node ID and grid coordinates of simulated matplotlib event.'
node = rmg.grid_coords_to_node_id(event.xdata, event.ydata)
assert_equal(results['node']['ID'], node, msg)
|
During the initial order phone call I became frustrated due to the misleading nature of the infomercial. It was advertised as a $19.95 item. Then it became a $32.00 item with the second camera at $19.99. Then they wanted to upsell a larger memory card and a bunch of other unwanted crap. So I didn't even complete the order. That, of course, did not prevent a $50.00 charge on my credit card. The product really, really is a useless piece of crap, and I'd loudly proclaim to anyone considering this product or this company to run for the hills. Extreme rip-off!
Order that crap online, never use the telephone option.
|
#######################################################################
# #
# Copyright 2015 Max Planck Institute #
# for Dynamics and Self-Organization #
# #
# This file is part of bfps. #
# #
# bfps is free software: you can redistribute it and/or modify #
# it under the terms of the GNU General Public License as published #
# by the Free Software Foundation, either version 3 of the License, #
# or (at your option) any later version. #
# #
# bfps is distributed in the hope that it will be useful, #
# but WITHOUT ANY WARRANTY; without even the implied warranty of #
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the #
# GNU General Public License for more details. #
# #
# You should have received a copy of the GNU General Public License #
# along with bfps. If not, see <http://www.gnu.org/licenses/> #
# #
# Contact: Cristian.Lalescu@ds.mpg.de #
# #
#######################################################################
import os
import sys
import shutil
import subprocess
import argparse
import h5py
import math
import numpy as np
import warnings
import bfps
from ._code import _code
from bfps import tools
class PP(_code):
"""This class is meant to stitch together the C++ code into a final source file,
compile it, and handle all job launching.
"""
def __init__(
self,
work_dir = './',
simname = 'test'):
_code.__init__(
self,
work_dir = work_dir,
simname = simname)
self.host_info = {'type' : 'cluster',
'environment' : None,
'deltanprocs' : 1,
'queue' : '',
'mail_address': '',
'mail_events' : None}
self.generate_default_parameters()
return None
def set_precision(
self,
fluid_dtype):
if fluid_dtype in [np.float32, np.float64]:
self.fluid_dtype = fluid_dtype
elif fluid_dtype in ['single', 'double']:
if fluid_dtype == 'single':
self.fluid_dtype = np.dtype(np.float32)
elif fluid_dtype == 'double':
self.fluid_dtype = np.dtype(np.float64)
self.rtype = self.fluid_dtype
if self.rtype == np.float32:
self.ctype = np.dtype(np.complex64)
self.C_field_dtype = 'float'
self.fluid_precision = 'single'
elif self.rtype == np.float64:
self.ctype = np.dtype(np.complex128)
self.C_field_dtype = 'double'
self.fluid_precision = 'double'
return None
def write_src(self):
self.version_message = (
'/***********************************************************************\n' +
'* this code automatically generated by bfps\n' +
'* version {0}\n'.format(bfps.__version__) +
'***********************************************************************/\n\n\n')
self.include_list = [
'"base.hpp"',
'"scope_timer.hpp"',
'"fftw_interface.hpp"',
'"full_code/main_code.hpp"',
'<cmath>',
'<iostream>',
'<hdf5.h>',
'<string>',
'<cstring>',
'<fftw3-mpi.h>',
'<omp.h>',
'<cfenv>',
'<cstdlib>',
'"full_code/{0}.hpp"\n'.format(self.dns_type)]
self.main = """
int main(int argc, char *argv[])
{{
bool fpe = (
(getenv("BFPS_FPE_OFF") == nullptr) ||
(getenv("BFPS_FPE_OFF") != std::string("TRUE")));
return main_code< {0} >(argc, argv, fpe);
}}
""".format(self.dns_type + '<{0}>'.format(self.C_field_dtype))
self.includes = '\n'.join(
['#include ' + hh
for hh in self.include_list])
with open(self.name + '.cpp', 'w') as outfile:
outfile.write(self.version_message + '\n\n')
outfile.write(self.includes + '\n\n')
outfile.write(self.main + '\n')
return None
def generate_default_parameters(self):
# these parameters are relevant for all PP classes
self.parameters['dealias_type'] = int(1)
self.parameters['dkx'] = float(1.0)
self.parameters['dky'] = float(1.0)
self.parameters['dkz'] = float(1.0)
self.parameters['nu'] = float(0.1)
self.parameters['fmode'] = int(1)
self.parameters['famplitude'] = float(0.5)
self.parameters['fk0'] = float(2.0)
self.parameters['fk1'] = float(4.0)
self.parameters['forcing_type'] = 'linear'
self.pp_parameters = {}
self.pp_parameters['iteration_list'] = np.zeros(1).astype(np.int)
return None
def extra_postprocessing_parameters(
self,
dns_type = 'joint_acc_vel_stats'):
pars = {}
if dns_type == 'joint_acc_vel_stats':
pars['max_acceleration_estimate'] = float(10)
pars['max_velocity_estimate'] = float(1)
pars['histogram_bins'] = int(129)
return pars
def get_data_file_name(self):
return os.path.join(self.work_dir, self.simname + '.h5')
def get_data_file(self):
return h5py.File(self.get_data_file_name(), 'r')
def get_particle_file_name(self):
return os.path.join(self.work_dir, self.simname + '_particles.h5')
def get_particle_file(self):
return h5py.File(self.get_particle_file_name(), 'r')
def get_postprocess_file_name(self):
return os.path.join(self.work_dir, self.simname + '_postprocess.h5')
def get_postprocess_file(self):
return h5py.File(self.get_postprocess_file_name(), 'r')
def compute_statistics(self, iter0 = 0, iter1 = None):
"""Run basic postprocessing on raw data.
The energy spectrum :math:`E(t, k)` and the enstrophy spectrum
:math:`\\frac{1}{2}\\omega^2(t, k)` are computed from the

.. math::

\\sum_{k \\leq \\|\\mathbf{k}\\| \\leq k+dk}\\hat{u_i} \\hat{u_j}^*, \\hskip .5cm
\\sum_{k \\leq \\|\\mathbf{k}\\| \\leq k+dk}\\hat{\\omega_i} \\hat{\\omega_j}^*

tensors, and the enstrophy spectrum is also used to
compute the dissipation :math:`\\varepsilon(t)`.
These basic quantities are stored in a newly created HDF5 file,
``simname_postprocess.h5``.
"""
if len(list(self.statistics.keys())) > 0:
return None
self.read_parameters()
with self.get_data_file() as data_file:
if 'moments' not in data_file['statistics'].keys():
return None
iter0 = min((data_file['statistics/moments/velocity'].shape[0] *
self.parameters['niter_stat']-1),
iter0)
if type(iter1) == type(None):
iter1 = data_file['iteration'].value
else:
iter1 = min(data_file['iteration'].value, iter1)
ii0 = iter0 // self.parameters['niter_stat']
ii1 = iter1 // self.parameters['niter_stat']
self.statistics['kshell'] = data_file['kspace/kshell'].value
self.statistics['kM'] = data_file['kspace/kM'].value
self.statistics['dk'] = data_file['kspace/dk'].value
computation_needed = True
pp_file = h5py.File(self.get_postprocess_file_name(), 'a')
if 'ii0' in pp_file.keys():
computation_needed = not (ii0 == pp_file['ii0'].value and
ii1 == pp_file['ii1'].value)
if computation_needed:
for k in pp_file.keys():
del pp_file[k]
if computation_needed:
pp_file['iter0'] = iter0
pp_file['iter1'] = iter1
pp_file['ii0'] = ii0
pp_file['ii1'] = ii1
pp_file['t'] = (self.parameters['dt']*
self.parameters['niter_stat']*
(np.arange(ii0, ii1+1).astype(np.float)))
pp_file['energy(t, k)'] = (
data_file['statistics/spectra/velocity_velocity'][ii0:ii1+1, :, 0, 0] +
data_file['statistics/spectra/velocity_velocity'][ii0:ii1+1, :, 1, 1] +
data_file['statistics/spectra/velocity_velocity'][ii0:ii1+1, :, 2, 2])/2
pp_file['enstrophy(t, k)'] = (
data_file['statistics/spectra/vorticity_vorticity'][ii0:ii1+1, :, 0, 0] +
data_file['statistics/spectra/vorticity_vorticity'][ii0:ii1+1, :, 1, 1] +
data_file['statistics/spectra/vorticity_vorticity'][ii0:ii1+1, :, 2, 2])/2
pp_file['vel_max(t)'] = data_file['statistics/moments/velocity'] [ii0:ii1+1, 9, 3]
pp_file['renergy(t)'] = data_file['statistics/moments/velocity'][ii0:ii1+1, 2, 3]/2
for k in ['t',
'energy(t, k)',
'enstrophy(t, k)',
'vel_max(t)',
'renergy(t)']:
if k in pp_file.keys():
self.statistics[k] = pp_file[k].value
self.compute_time_averages()
return None
def compute_time_averages(self):
"""Compute easy stats.
Further computation of statistics based on the contents of
``simname_postprocess.h5``.
Standard quantities are as follows
(consistent with [Ishihara]_):
.. math::

U_{\\textrm{int}}(t) = \\sqrt{\\frac{2E(t)}{3}}, \\hskip .5cm
L_{\\textrm{int}}(t) = \\frac{\\pi}{2U_{\\textrm{int}}^2(t)} \\int \\frac{dk}{k} E(t, k), \\hskip .5cm
T_{\\textrm{int}}(t) = \\frac{L_{\\textrm{int}}(t)}{U_{\\textrm{int}}(t)}

.. math::

\\eta_K = \\left(\\frac{\\nu^3}{\\varepsilon}\\right)^{1/4}, \\hskip .5cm
\\tau_K = \\left(\\frac{\\nu}{\\varepsilon}\\right)^{1/2}, \\hskip .5cm
\\lambda = \\sqrt{\\frac{15 \\nu U_{\\textrm{int}}^2}{\\varepsilon}}

.. math::

Re = \\frac{U_{\\textrm{int}} L_{\\textrm{int}}}{\\nu}, \\hskip .5cm
R_{\\lambda} = \\frac{U_{\\textrm{int}} \\lambda}{\\nu}
.. [Ishihara] T. Ishihara et al,
*Small-scale statistics in high-resolution direct numerical
simulation of turbulence: Reynolds number dependence of
one-point velocity gradient statistics*.
J. Fluid Mech.,
**592**, 335-366, 2007
"""
for key in ['energy', 'enstrophy']:
self.statistics[key + '(t)'] = (self.statistics['dk'] *
np.sum(self.statistics[key + '(t, k)'], axis = 1))
self.statistics['Uint(t)'] = np.sqrt(2*self.statistics['energy(t)'] / 3)
self.statistics['Lint(t)'] = ((self.statistics['dk']*np.pi /
(2*self.statistics['Uint(t)']**2)) *
np.nansum(self.statistics['energy(t, k)'] /
self.statistics['kshell'][None, :], axis = 1))
for key in ['energy',
'enstrophy',
'vel_max',
'Uint',
'Lint']:
if key + '(t)' in self.statistics.keys():
self.statistics[key] = np.average(self.statistics[key + '(t)'], axis = 0)
for suffix in ['', '(t)']:
self.statistics['diss' + suffix] = (self.parameters['nu'] *
self.statistics['enstrophy' + suffix]*2)
self.statistics['etaK' + suffix] = (self.parameters['nu']**3 /
self.statistics['diss' + suffix])**.25
self.statistics['tauK' + suffix] = (self.parameters['nu'] /
self.statistics['diss' + suffix])**.5
self.statistics['Re' + suffix] = (self.statistics['Uint' + suffix] *
self.statistics['Lint' + suffix] /
self.parameters['nu'])
self.statistics['lambda' + suffix] = (15 * self.parameters['nu'] *
self.statistics['Uint' + suffix]**2 /
self.statistics['diss' + suffix])**.5
self.statistics['Rlambda' + suffix] = (self.statistics['Uint' + suffix] *
self.statistics['lambda' + suffix] /
self.parameters['nu'])
self.statistics['kMeta' + suffix] = (self.statistics['kM'] *
self.statistics['etaK' + suffix])
if self.parameters['dealias_type'] == 1:
self.statistics['kMeta' + suffix] *= 0.8
self.statistics['Tint'] = self.statistics['Lint'] / self.statistics['Uint']
self.statistics['Taylor_microscale'] = self.statistics['lambda']
return None
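# A hypothetical usage sketch of the statistics pipeline above (names and
# values are illustrative; assumes a finished simulation in the working dir):
#
# c = PP(work_dir='./', simname='test')
# c.compute_statistics() # fills c.statistics and the _postprocess.h5 file
# print(c.statistics['Rlambda'], c.statistics['kMeta'], c.statistics['Tint'])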
def set_plt_style(
self,
style = {'dashes' : (None, None)}):
self.style.update(style)
return None
def convert_complex_from_binary(
self,
field_name = 'vorticity',
iteration = 0,
file_name = None):
"""read the Fourier representation of a vector field.
Read the binary file containing iteration ``iteration`` of the
field ``field_name``, and write it in a ``.h5`` file.
"""
data = np.memmap(
os.path.join(self.work_dir,
self.simname + '_{0}_i{1:0>5x}'.format('c' + field_name, iteration)),
dtype = self.ctype,
mode = 'r',
shape = (self.parameters['ny'],
self.parameters['nz'],
self.parameters['nx']//2+1,
3))
if type(file_name) == type(None):
file_name = self.simname + '_{0}_i{1:0>5x}.h5'.format('c' + field_name, iteration)
file_name = os.path.join(self.work_dir, file_name)
f = h5py.File(file_name, 'a')
f[field_name + '/complex/{0}'.format(iteration)] = data
f.close()
return None
def job_parser_arguments(
self,
parser):
parser.add_argument(
'--ncpu',
type = int,
dest = 'ncpu',
default = -1)
parser.add_argument(
'--np', '--nprocesses',
metavar = 'NPROCESSES',
help = 'number of mpi processes to use',
type = int,
dest = 'nb_processes',
default = 4)
parser.add_argument(
'--ntpp', '--nthreads-per-process',
type = int,
dest = 'nb_threads_per_process',
metavar = 'NTHREADS_PER_PROCESS',
help = 'number of threads to use per MPI process',
default = 1)
parser.add_argument(
'--no-submit',
action = 'store_true',
dest = 'no_submit')
parser.add_argument(
'--environment',
type = str,
dest = 'environment',
default = None)
parser.add_argument(
'--minutes',
type = int,
dest = 'minutes',
default = 5,
help = 'If environment supports it, this is the requested wall-clock-limit.')
parser.add_argument(
'--njobs',
type = int, dest = 'njobs',
default = 1)
return None
def simulation_parser_arguments(
self,
parser):
parser.add_argument(
'--simname',
type = str, dest = 'simname',
default = 'test')
parser.add_argument(
'--wd',
type = str, dest = 'work_dir',
default = './')
parser.add_argument(
'--precision',
choices = ['single', 'double'],
type = str,
default = 'single')
parser.add_argument(
'--iter0',
type = int,
dest = 'iter0',
default = 0)
parser.add_argument(
'--iter1',
type = int,
dest = 'iter1',
default = 0)
return None
def particle_parser_arguments(
self,
parser):
parser.add_argument(
'--particle-rand-seed',
type = int,
dest = 'particle_rand_seed',
default = None)
parser.add_argument(
'--pclouds',
type = int,
dest = 'pclouds',
default = 1,
help = ('number of particle clouds. Particle "clouds" '
'consist of particles distributed according to '
'pcloud-type.'))
parser.add_argument(
'--pcloud-type',
choices = ['random-cube',
'regular-cube'],
dest = 'pcloud_type',
default = 'random-cube')
parser.add_argument(
'--particle-cloud-size',
type = float,
dest = 'particle_cloud_size',
default = 2*np.pi)
return None
def add_parser_arguments(
self,
parser):
subparsers = parser.add_subparsers(
dest = 'DNS_class',
help = 'type of simulation to run')
subparsers.required = True
parser_native_binary_to_hdf5 = subparsers.add_parser(
'native_binary_to_hdf5',
help = 'convert native binary to hdf5')
self.simulation_parser_arguments(parser_native_binary_to_hdf5)
self.job_parser_arguments(parser_native_binary_to_hdf5)
self.parameters_to_parser_arguments(parser_native_binary_to_hdf5)
parser_get_rfields = subparsers.add_parser(
'get_rfields',
help = 'get real space velocity field')
self.simulation_parser_arguments(parser_get_rfields)
self.job_parser_arguments(parser_get_rfields)
self.parameters_to_parser_arguments(parser_get_rfields)
parser_joint_acc_vel_stats = subparsers.add_parser(
'joint_acc_vel_stats',
help = 'get joint acceleration and velocity statistics')
self.simulation_parser_arguments(parser_joint_acc_vel_stats)
self.job_parser_arguments(parser_joint_acc_vel_stats)
self.parameters_to_parser_arguments(parser_joint_acc_vel_stats)
self.parameters_to_parser_arguments(
parser_joint_acc_vel_stats,
parameters = self.extra_postprocessing_parameters('joint_acc_vel_stats'))
return None
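    # Illustrative invocation of the resulting CLI (hedged: the launcher
    # script name and values are hypothetical; the subcommand and flags are
    # the ones registered above):
    #   python launcher.py joint_acc_vel_stats --simname test --np 8 --minutes 30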
def prepare_launch(
self,
args = []):
"""Set up reasonable parameters.
With the default Lundgren forcing applied in the band [2, 4],
we can estimate the dissipation, therefore we can estimate
:math:`k_M \\eta_K` and constrain the viscosity.
In brief, the command line parameter :math:`k_M \\eta_K` is
used in the following formula for :math:`\\nu` (:math:`N` is the
number of real space grid points per coordinate):
.. math::
\\nu = \\left(\\frac{2 k_M \\eta_K}{N} \\right)^{4/3}
With this choice, the average dissipation :math:`\\varepsilon`
will be close to 0.4, and the integral scale velocity will be
close to 0.77, yielding the approximate value for the Taylor
microscale and corresponding Reynolds number:
.. math::
\\lambda \\approx 4.75\\left(\\frac{2 k_M \\eta_K}{N} \\right)^{4/6}, \\hskip .5in
R_\\lambda \\approx 3.7 \\left(\\frac{N}{2 k_M \\eta_K} \\right)^{4/6}
"""
opt = _code.prepare_launch(self, args = args)
self.set_precision(opt.precision)
self.dns_type = opt.DNS_class
self.name = self.dns_type + '-' + self.fluid_precision + '-v' + bfps.__version__
# merge parameters if needed
for k in self.pp_parameters.keys():
self.parameters[k] = self.pp_parameters[k]
self.pars_from_namespace(opt)
niter_out = self.get_data_file()['parameters/niter_out'].value
assert(opt.iter0 % niter_out == 0)
self.pp_parameters['iteration_list'] = np.arange(
                opt.iter0, opt.iter1+niter_out, niter_out, dtype = int)
return opt
def launch(
self,
args = [],
**kwargs):
opt = self.prepare_launch(args = args)
self.launch_jobs(opt = opt, **kwargs)
return None
def get_checkpoint_0_fname(self):
return os.path.join(
self.work_dir,
self.simname + '_checkpoint_0.h5')
def generate_tracer_state(
self,
rseed = None,
species = 0):
with h5py.File(self.get_checkpoint_0_fname(), 'a') as data_file:
dset = data_file[
'tracers{0}/state/0'.format(species)]
            if rseed is not None:
np.random.seed(rseed)
nn = self.parameters['nparticles']
cc = int(0)
batch_size = int(1e6)
while nn > 0:
if nn > batch_size:
dset[cc*batch_size:(cc+1)*batch_size] = np.random.random(
(batch_size, 3))*2*np.pi
nn -= batch_size
else:
dset[cc*batch_size:cc*batch_size+nn] = np.random.random(
(nn, 3))*2*np.pi
nn = 0
cc += 1
return None
def generate_vector_field(
self,
rseed = 7547,
spectra_slope = 1.,
amplitude = 1.,
iteration = 0,
field_name = 'vorticity',
write_to_file = False,
# to switch to constant field, use generate_data_3D_uniform
# for scalar_generator
scalar_generator = tools.generate_data_3D):
"""generate vector field.
The generated field is not divergence free, but it has the proper
shape.
:param rseed: seed for random number generator
:param spectra_slope: spectrum of field will look like k^(-p)
:param amplitude: all amplitudes are multiplied with this value
:param iteration: the field is written at this iteration
:param field_name: the name of the field being generated
:param write_to_file: should we write the field to file?
:param scalar_generator: which function to use for generating the
individual components.
Possible values: bfps.tools.generate_data_3D,
bfps.tools.generate_data_3D_uniform
:type rseed: int
:type spectra_slope: float
:type amplitude: float
:type iteration: int
:type field_name: str
:type write_to_file: bool
:type scalar_generator: function
:returns: ``Kdata``, a complex valued 4D ``numpy.array`` that uses the
transposed FFTW layout.
Kdata[ky, kz, kx, i] is the amplitude of mode (kx, ky, kz) for
the i-th component of the field.
(i.e. x is the fastest index and z the slowest index in the
real-space representation).
"""
np.random.seed(rseed)
Kdata00 = scalar_generator(
self.parameters['nz']//2,
self.parameters['ny']//2,
self.parameters['nx']//2,
p = spectra_slope,
amplitude = amplitude).astype(self.ctype)
Kdata01 = scalar_generator(
self.parameters['nz']//2,
self.parameters['ny']//2,
self.parameters['nx']//2,
p = spectra_slope,
amplitude = amplitude).astype(self.ctype)
Kdata02 = scalar_generator(
self.parameters['nz']//2,
self.parameters['ny']//2,
self.parameters['nx']//2,
p = spectra_slope,
amplitude = amplitude).astype(self.ctype)
Kdata0 = np.zeros(
Kdata00.shape + (3,),
Kdata00.dtype)
Kdata0[..., 0] = Kdata00
Kdata0[..., 1] = Kdata01
Kdata0[..., 2] = Kdata02
Kdata1 = tools.padd_with_zeros(
Kdata0,
self.parameters['nz'],
self.parameters['ny'],
self.parameters['nx'])
if write_to_file:
Kdata1.tofile(
os.path.join(self.work_dir,
self.simname + "_c{0}_i{1:0>5x}".format(field_name, iteration)))
return Kdata1
def copy_complex_field(
self,
src_file_name,
src_dset_name,
dst_file,
dst_dset_name,
make_link = True):
# I define a min_shape thingie, but for now I only trust this method for
# the case of increasing/decreasing by the same factor in all directions.
# in principle we could write something more generic, but i'm not sure
# how complicated that would be
dst_shape = (self.parameters['nz'],
self.parameters['ny'],
(self.parameters['nx']+2) // 2,
3)
src_file = h5py.File(src_file_name, 'r')
if (src_file[src_dset_name].shape == dst_shape):
if make_link and (src_file[src_dset_name].dtype == self.ctype):
dst_file[dst_dset_name] = h5py.ExternalLink(
src_file_name,
src_dset_name)
else:
dst_file.create_dataset(
dst_dset_name,
shape = dst_shape,
dtype = self.ctype,
fillvalue = 0.0)
for kz in range(src_file[src_dset_name].shape[0]):
dst_file[dst_dset_name][kz] = src_file[src_dset_name][kz]
else:
            # shapes differ: copy only the modes present in both files
min_shape = (min(dst_shape[0], src_file[src_dset_name].shape[0]),
min(dst_shape[1], src_file[src_dset_name].shape[1]),
min(dst_shape[2], src_file[src_dset_name].shape[2]),
3)
dst_file.create_dataset(
dst_dset_name,
shape = dst_shape,
dtype = np.dtype(self.ctype),
fillvalue = complex(0))
for kz in range(min_shape[0]):
dst_file[dst_dset_name][kz,:min_shape[1], :min_shape[2]] = \
src_file[src_dset_name][kz, :min_shape[1], :min_shape[2]]
        src_file.close()
        return None
def prepare_post_file(
self,
opt = None):
self.pp_parameters.update(
self.extra_postprocessing_parameters(self.dns_type))
self.pars_from_namespace(
opt,
parameters = self.pp_parameters,
get_sim_info = False)
for kk in ['nx', 'ny', 'nz']:
self.parameters[kk] = self.get_data_file()['parameters/' + kk].value
n = self.parameters['nx']
if self.dns_type in ['filtered_slices',
'filtered_acceleration']:
            if opt.klist_kmax is None:
                opt.klist_kmax = n / 3.
            if opt.klist_kmin is None:
                opt.klist_kmin = 6.
kvals = bfps_addons.tools.power_space_array(
power = opt.klist_power,
size = opt.klist_size,
vmin = opt.klist_kmin,
vmax = opt.klist_kmax)
if opt.test_klist:
for i in range(opt.klist_size):
print('kcut{0} = {1}, ell{0} = {2:.3e}'.format(
i, kvals[i], 2*np.pi / kvals[i]))
opt.no_submit = True
self.pp_parameters['kcut'] = kvals
self.rewrite_par(
group = self.dns_type + '/parameters',
parameters = self.pp_parameters,
file_name = os.path.join(self.work_dir, self.simname + '_post.h5'))
histogram_bins = opt.histogram_bins
        if (histogram_bins is None and
'histogram_bins' in self.pp_parameters.keys()):
histogram_bins = self.pp_parameters['histogram_bins']
with h5py.File(os.path.join(self.work_dir, self.simname + '_post.h5'), 'r+') as ofile:
group = ofile[self.dns_type]
group.require_group('histograms')
group.require_group('moments')
group.require_group('spectra')
vec_spectra_stats = []
vec4_rspace_stats = []
scal_rspace_stats = []
if self.dns_type == 'joint_acc_vel_stats':
vec_spectra_stats.append('velocity')
vec4_rspace_stats.append('velocity')
vec_spectra_stats.append('acceleration')
vec4_rspace_stats.append('acceleration')
for quantity in scal_rspace_stats:
if quantity not in group['histograms'].keys():
time_chunk = 2**20 // (8*histogram_bins)
time_chunk = max(time_chunk, 1)
group['histograms'].create_dataset(
quantity,
(1, histogram_bins),
chunks = (time_chunk, histogram_bins),
maxshape = (None, histogram_bins),
dtype = np.int64)
else:
assert(histogram_bins ==
group['histograms/' + quantity].shape[1])
if quantity not in group['moments'].keys():
time_chunk = 2**20 // (8*10)
time_chunk = max(time_chunk, 1)
group['moments'].create_dataset(
quantity,
(1, 10),
chunks = (time_chunk, 10),
maxshape = (None, 10),
dtype = np.float64)
if self.dns_type == 'joint_acc_vel_stats':
quantity = 'acceleration_and_velocity_components'
if quantity not in group['histograms'].keys():
time_chunk = 2**20 // (8*9*histogram_bins**2)
time_chunk = max(time_chunk, 1)
group['histograms'].create_dataset(
quantity,
(1, histogram_bins, histogram_bins, 3, 3),
chunks = (time_chunk, histogram_bins, histogram_bins, 3, 3),
maxshape = (None, histogram_bins, histogram_bins, 3, 3),
dtype = np.int64)
quantity = 'acceleration_and_velocity_magnitudes'
if quantity not in group['histograms'].keys():
time_chunk = 2**20 // (8*histogram_bins**2)
time_chunk = max(time_chunk, 1)
group['histograms'].create_dataset(
quantity,
(1, histogram_bins, histogram_bins),
chunks = (time_chunk, histogram_bins, histogram_bins),
maxshape = (None, histogram_bins, histogram_bins),
dtype = np.int64)
ncomps = 4
for quantity in vec4_rspace_stats:
if quantity not in group['histograms'].keys():
time_chunk = 2**20 // (8*histogram_bins*ncomps)
time_chunk = max(time_chunk, 1)
group['histograms'].create_dataset(
quantity,
(1, histogram_bins, ncomps),
chunks = (time_chunk, histogram_bins, ncomps),
maxshape = (None, histogram_bins, ncomps),
dtype = np.int64)
if quantity not in group['moments'].keys():
time_chunk = 2**20 // (8*10*ncomps)
time_chunk = max(time_chunk, 1)
group['moments'].create_dataset(
quantity,
(1, 10, ncomps),
chunks = (time_chunk, 10, ncomps),
maxshape = (None, 10, ncomps),
dtype = np.float64)
time_chunk = 2**20 // (
4*3*
self.parameters['nx']*self.parameters['ny'])
time_chunk = max(time_chunk, 1)
for quantity in vec_spectra_stats:
df = self.get_data_file()
if quantity + '_' + quantity not in group['spectra'].keys():
spec_chunks = df['statistics/spectra/velocity_velocity'].chunks
spec_shape = df['statistics/spectra/velocity_velocity'].shape
spec_maxshape = df['statistics/spectra/velocity_velocity'].maxshape
group['spectra'].create_dataset(
quantity + '_' + quantity,
spec_shape,
chunks = spec_chunks,
maxshape = spec_maxshape,
dtype = np.float64)
df.close()
return None
def prepare_field_file(self):
df = self.get_data_file()
if 'field_dtype' in df.keys():
# we don't need to do anything, raw binary files are used
return None
last_iteration = df['iteration'].value
cppf = df['parameters/checkpoints_per_file'].value
niter_out = df['parameters/niter_out'].value
with h5py.File(os.path.join(self.work_dir, self.simname + '_fields.h5'), 'a') as ff:
ff.require_group('vorticity')
ff.require_group('vorticity/complex')
checkpoint = 0
while True:
cpf_name = os.path.join(
self.work_dir,
self.simname + '_checkpoint_{0}.h5'.format(checkpoint))
if os.path.exists(cpf_name):
cpf = h5py.File(cpf_name, 'r')
for iter_name in cpf['vorticity/complex'].keys():
if iter_name not in ff['vorticity/complex'].keys():
ff['vorticity/complex/' + iter_name] = h5py.ExternalLink(
cpf_name,
'vorticity/complex/' + iter_name)
checkpoint += 1
else:
break
return None
def launch_jobs(
self,
opt = None,
particle_initial_condition = None):
self.prepare_post_file(opt)
self.prepare_field_file()
self.run(
nb_processes = opt.nb_processes,
nb_threads_per_process = opt.nb_threads_per_process,
njobs = opt.njobs,
hours = opt.minutes // 60,
minutes = opt.minutes % 60,
no_submit = opt.no_submit,
err_file = 'err_' + self.dns_type,
out_file = 'out_' + self.dns_type)
return None
|
import os
SF_MIRROR = 'http://iweb.dl.sourceforge.net'
pythons = {#'26': 7,
'27': 7,
#'32': 7,
'33': 7.1,
'34': 7.1}
VIRT_BASE = "c:/vp/"
X64_EXT = os.environ.get('X64_EXT', "x64")
libs = {
'zlib': {
'url': 'http://zlib.net/zlib128.zip',
'hash': 'md5:126f8676442ffbd97884eb4d6f32afb4',
'dir': 'zlib-1.2.8',
},
'jpeg': {
'url': 'http://www.ijg.org/files/jpegsr9a.zip',
'hash': 'md5:a34f3c82760270ee1e1885b15b90a72e', # not found - generated by wiredfool
'dir': 'jpeg-9a',
},
'tiff': {
'url': 'ftp://ftp.remotesensing.org/pub/libtiff/tiff-4.0.4.zip',
'hash': 'md5:8f538a34156188f9a8dcddb679c65d1e',
'dir': 'tiff-4.0.4',
},
'freetype': {
'url': 'http://download.savannah.gnu.org/releases/freetype/freetype-2.6.tar.gz',
'hash': 'md5:1d733ea6c1b7b3df38169fbdbec47d2b',
'dir': 'freetype-2.6',
},
'lcms': {
'url': SF_MIRROR+'/project/lcms/lcms/2.7/lcms2-2.7.zip',
'hash': 'sha1:7ff1a5b721ca719760ba6eb4ec6f38d5e65381cf',
'dir': 'lcms2-2.7',
},
'tcl-8.5': {
'url': SF_MIRROR+'/project/tcl/Tcl/8.5.18/tcl8518-src.zip',
'hash': 'sha1:4c2aed9043088c630a4c795265e2738ef1b4db3b',
'dir': '',
},
'tk-8.5': {
'url': SF_MIRROR+'/project/tcl/Tcl/8.5.18/tk8518-src.zip',
'hash': 'sha1:273f55148777413774aa722ecad25cabda1e31ae',
'dir': '',
'version':'8.5.18',
},
'tcl-8.6': {
'url': SF_MIRROR+'/project/tcl/Tcl/8.6.4/tcl864-src.zip',
'hash': 'md5:35748d2fc61e08a2fdb23b85c6f8c4a0',
'dir': '',
},
'tk-8.6': {
'url': SF_MIRROR+'/project/tcl/Tcl/8.6.4/tk864-src.zip',
'hash': 'md5:111d45061a69e7f5250b6ec8ca7c4f35',
'dir': '',
'version':'8.6.4',
},
'webp': {
'url': 'http://downloads.webmproject.org/releases/webp/libwebp-0.4.3.tar.gz',
'hash': 'sha1:1c307a61c4d0018620b4ba9a58e8f48a8d6640ef',
'dir': 'libwebp-0.4.3',
},
'openjpeg': {
'url': SF_MIRROR+'/project/openjpeg/openjpeg/2.1.0/openjpeg-2.1.0.tar.gz',
'hash': 'md5:f6419fcc233df84f9a81eb36633c6db6',
'dir': 'openjpeg-2.1.0',
},
}
bin_libs = {
'openjpeg': {
'filename': 'openjpeg-2.0.0-win32-x86.zip',
'hash': 'sha1:xxx',
'version': '2.0'
},
}
compilers = {
(7, 64): {
'env_version': 'v7.0',
'vc_version': '2008',
'env_flags': '/x64 /xp',
'inc_dir': 'msvcr90-x64',
'platform': 'x64',
'webp_platform': 'x64',
},
(7, 32): {
'env_version': 'v7.0',
'vc_version': '2008',
'env_flags': '/x86 /xp',
'inc_dir': 'msvcr90-x32',
'platform': 'Win32',
'webp_platform': 'x86',
},
(7.1, 64): {
'env_version': 'v7.1',
'vc_version': '2010',
'env_flags': '/x64 /vista',
'inc_dir': 'msvcr10-x64',
'platform': 'x64',
'webp_platform': 'x64',
},
(7.1, 32): {
'env_version': 'v7.1',
'vc_version': '2010',
'env_flags': '/x86 /vista',
'inc_dir': 'msvcr10-x32',
'platform': 'Win32',
'webp_platform': 'x86',
},
}
def pyversion_fromEnv():
py = os.environ['PYTHON']
py_version = '27'
for k in pythons.keys():
if k in py:
py_version = k
break
if '64' in py:
py_version = '%s%s' % (py_version, X64_EXT)
return py_version
def compiler_fromEnv():
    py = os.environ['PYTHON']
    # fall back to the default (2.7) toolchain when no version matches
    compiler_version = pythons['27']
    for k, v in pythons.items():
        if k in py:
            compiler_version = v
            break
bit = 32
if '64' in py:
bit = 64
return compilers[(compiler_version, bit)]
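# Illustrative usage (hedged: the PYTHON path below is hypothetical):
#   os.environ['PYTHON'] = 'c:/Python34-x64/'
#   pyversion_fromEnv()  # -> '34' + X64_EXT, e.g. '34x64'
#   compiler_fromEnv()   # -> compilers[(7.1, 64)], the v7.1 x64 toolchain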
|
>> "foo xs 0 = True"
>> | "foo (Suc _) = False"
>> | "foo (x # xs) (Suc n) = foo xs n"
> away patterns involving Suc, so that you don't have to do it yourself.
> each rule with a "Suc n" pattern, there is a matching rule with a "0"
> pattern in its place.
> "foo 0 = True"
> | "foo (x # xs) 0 = True"
> of Code_Nat.thy, where I have modified the code preprocessor.
| "double n = n + n"
|
from __future__ import unicode_literals
import boto
import six
from nose.plugins.skip import SkipTest
def version_tuple(v):
return tuple(map(int, (v.split("."))))
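# e.g. version_tuple("2.9.3") == (2, 9, 3); tuples compare correctly even
# across multi-digit components, unlike plain string comparison.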
# Note: See https://github.com/spulec/moto/issues/201 for why this is a separate method.
def skip_test():
raise SkipTest
class requires_boto_gte(object):
"""Decorator for requiring boto version greater than or equal to 'version'"""
def __init__(self, version):
self.version = version
def __call__(self, test):
boto_version = version_tuple(boto.__version__)
required = version_tuple(self.version)
if boto_version >= required:
return test
return skip_test
class py3_requires_boto_gte(object):
"""Decorator for requiring boto version greater than or equal to 'version'
when running on Python 3. (Not all of boto is Python 3 compatible.)"""
def __init__(self, version):
self.version = version
def __call__(self, test):
if not six.PY3:
return test
boto_version = version_tuple(boto.__version__)
required = version_tuple(self.version)
if boto_version >= required:
return test
return skip_test
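# Illustrative usage (the test function below is hypothetical):
#
#   @requires_boto_gte("2.9")
#   def test_needs_boto_2_9():
#       ...
#
# With an older boto installed, the decorated test is replaced by
# skip_test() and reported as skipped instead of failing.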
|
The window opens just fine with the parameter (id) i have sent.
How can i set those properties?
You can add .. name="yourwindowname"
Forgot to say: you should also change target="yourwindowname"
> The window opens just fine with the parameter (id) i have sent.
> How can i set those properties?
do the job. You can add non-ASP events to ASP controls too.
How to set property of an asp:Hyperlink in a template from a user control.
|
# SIM-CITY webservice
#
# Copyright 2015 Joris Borgdorff <j.borgdorff@esciencecenter.nl>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from gevent import monkey; monkey.patch_all()
from bottle import post, get, run, delete, request, response, HTTPResponse
import simcity
from simcity.util import listfiles
from simcityweb.util import error, get_simulation_config
import simcityexplore
from couchdb.http import ResourceConflict
from picas.documents import Document
config_sim = simcity.get_config().section('Simulations')
couch_cfg = simcity.get_config().section('task-db')
@get('/explore/simulate/<name>/<version>')
def get_simulation_by_name_version(name, version=None):
try:
sim, version = get_simulation_config(name, version, config_sim)
return sim[version]
except HTTPResponse as ex:
return ex
@get('/explore/simulate/<name>')
def get_simulation_by_name(name):
try:
sim, version = get_simulation_config(name, None, config_sim)
return sim
except HTTPResponse as ex:
return ex
@get('/explore')
def explore():
return "API: overview | simulate | job"
@get('/explore/simulate')
def simulate_list():
files = listfiles(config_sim['path'])
return {"simulations": [f[:-5] for f in files if f.endswith('.json')]}
@post('/explore/simulate/<name>/<version>')
def simulate_name_version(name, version=None):
try:
sim, version = get_simulation_config(name, version, config_sim)
sim = sim[version]
query = dict(request.json)
task_id = None
if '_id' in query:
task_id = query['_id']
del query['_id']
params = simcityexplore.parse_parameters(query, sim['parameters'])
except HTTPResponse as ex:
return ex
except ValueError as ex:
        return error(400, str(ex))
task_props = {
'name': name,
'ensemble': query['ensemble'],
'command': sim['command'],
'version': version,
'input': params,
}
if task_id is not None:
task_props['_id'] = task_id
try:
token = simcity.add_task(task_props)
except ResourceConflict:
return error(400, "simulation name " + task_id + " already taken")
try:
simcity.submit_if_needed(config_sim['default_host'], 1)
    except Exception:
pass # too bad. User can call /explore/job.
response.status = 201 # created
url = '%s%s/%s' % (couch_cfg.get('public_url', couch_cfg['url']), couch_cfg['database'],
token.id)
response.set_header('Location', url)
return token.value
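# Illustrative request against this route (hedged: simulation name and
# parameters are hypothetical; the port comes from run() at the bottom):
#   curl -X POST http://localhost:9090/explore/simulate/mysim/1.0 \
#        -H 'Content-Type: application/json' \
#        -d '{"ensemble": "demo", "some_param": 0.5}'
# A successful call answers 201 Created with a Location header pointing
# at the new task document in CouchDB.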
@post('/explore/simulate/<name>')
def simulate_name(name):
return simulate_name_version(name)
@get('/explore/view/totals')
def overview():
try:
return simcity.overview_total()
    except Exception:
return error(500, "cannot read overview")
@post('/explore/job')
def submit_job():
return submit_job_to_host(config_sim['default_host'])
@post('/explore/job/<host>')
def submit_job_to_host(host):
try:
job = simcity.submit_if_needed(host, int(config_sim['max_jobs']))
except ValueError:
return error(404, "Host " + host + " unknown")
except IOError:
return error(502, "Cannot connect to host")
else:
if job is None:
response.status = 503 # service temporarily unavailable
else:
response.status = 201 # created
return {key: job[key] for key in ['_id', 'batch_id', 'hostname']}
@get('/explore/view/simulations/<name>/<version>')
def simulations_view(name, version):
sim, version = get_simulation_config(name, version, config_sim)
design_doc = name + '_' + version
doc_id = '_design/' + design_doc
task_db = simcity.get_task_database()
try:
task_db.get(doc_id)
    except Exception:
map_fun = """
function(doc) {
if (doc.type === "task" && doc.name === "%s" && doc.version === "%s") {
emit(doc._id, {
"id": doc._id,
"rev": doc._rev,
"ensemble": doc.ensemble,
"url": "%s%s/" + doc._id,
"error": doc.error,
"lock": doc.lock,
"done": doc.done,
"input": doc.input
});
}
}""" % (name, version, '/couchdb/', couch_cfg['database'])
task_db.add_view('all_docs', map_fun, design_doc=design_doc)
url = '%s%s/%s/_view/all_docs' % ('/couchdb/', # couch_cfg['public_url'],
couch_cfg['database'], doc_id)
response.status = 302 # temporary redirect
response.set_header('Location', url)
return
@get('/explore/view/simulations/<name>/<version>/<ensemble>')
def ensemble_view(name, version, ensemble):
sim, version = get_simulation_config(name, version, config_sim)
design_doc = '{}_{}_{}'.format(name, version, ensemble)
doc_id = '_design/' + design_doc
task_db = simcity.get_task_database()
try:
task_db.get(doc_id)
    except Exception:
map_fun = """
function(doc) {
if (doc.type === "task" && doc.name === "%s" && doc.version === "%s" && doc.ensemble === "%s") {
emit(doc._id, {
"id": doc._id,
"rev": doc._rev,
"url": "%s%s/" + doc._id,
"error": doc.error,
"lock": doc.lock,
"done": doc.done,
"input": doc.input
});
}
}""" % (name, version, ensemble, '/couchdb/', couch_cfg['database'])
task_db.add_view('all_docs', map_fun, design_doc=design_doc)
url = '%s%s/%s/_view/all_docs' % ('/couchdb/', # couch_cfg['public_url'],
couch_cfg['database'], doc_id)
response.status = 302 # temporary redirect
response.set_header('Location', url)
return
@get('/explore/simulation/<id>')
def get_simulation(id):
try:
return simcity.get_task_database().get(id).value
except ValueError:
return error(404, "simulation does not exist")
@delete('/explore/simulation/<id>')
def del_simulation(id):
rev = request.query.get('rev')
if rev is None:
rev = request.get_header('If-Match')
if rev is None:
return error(409, "revision not specified")
task = Document({'_id': id, '_rev': rev})
try:
simcity.get_task_database().delete(task)
return {'ok': True}
except ResourceConflict:
return error(409, "resource conflict")
run(host='localhost', port=9090, server='gevent')
|
The attack occurred on one of Stockholm's busiest pedestrian streets.
A truck rammed into a crowded street in Central Stockholm Friday afternoon, killing two to three people and injuring a “large number,” according to police. “Sweden has been attacked. Everything indicates an act of terror,” Swedish Prime Minister Stefan Lofven said in a press conference shortly after the attack.
The incident took place on Drottninggatan, one of the city’s busiest pedestrian streets near major shopping centers. Witnesses say they saw a large truck plough through a crowd and slam into a department store. “I was on my way to the exit and just saw the wall coming towards us like an avalanche,” said one witness, Christoffer, as TheLocal.se reported. “People turned in panic and fled towards the exits. Then the main thing was to get away from the scene as quickly as possible. My first thought was that a bomb had exploded,” he said.
Police cleared the area and set up barriers around the scene of the incident.
“We are still trying to determine who the attacker was, if the attack was carried by one or more people, and the number of injured,” Swedish security service (Säpo) spokesperson Nina Odermalm Schei said in a statement.
The chief of Säpo, Anders Thornberg, said they were investigating a “person of interest” in the case.
Authorities shut down Stockholm’s subway system and evacuated several shopping locations and the city’s main railway station after the incident.
On March 22, an extremist in London rammed a car through crowds on Westminster bridge and attacked police with a knife, killing five and injuring 50. Terrorists also carried out deadly attacks by driving vehicles into crowds in Berlin and Nice last year.
The company that owned the truck in the Stockholm incident, brewery company Spendrups, say the truck was stolen Thursday. “It’s one of our distribution vehicles which runs deliveries. During a delivery to the restaurant Caliente someone jumped into the driver’s cabin and drove off with the car, while the driver unloads,” company spokesperson Mårten Lyth told Swedish TT news outlet.
The incident took place in the same area as a 2010 suicide bomb attack during Christmas shopping season. Only the bomber died in that attack.
Stay tuned for updates to the story here.
This article was updated to include new developments on the story.
|
"""
ECDSA signature scheme.
Requires Python >= 2.4 (http://pypi.python.org/pypi/hashlib is needed for
python2.4).
"""
import hashlib
import random
import time
# Local import
try:
import wcurve
except ImportError:
from os.path import abspath, dirname
import sys
parent = dirname(dirname(abspath(__file__)))
sys.path.append(parent)
import wcurve
def _big_int_unpack_be(seq):
p = None
if isinstance(seq, str):
p = lambda x: ord(x)
else:
p = lambda x: x
return sum([p(seq[i]) << (i * 8) for i in range(len(seq) - 1, -1, -1)])
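# Note: byte 0 of the input ends up as the least significant byte, e.g.
# _big_int_unpack_be(b'\x01\x02') == 0x0201 == 513. This is harmless here
# because sign() and verify() convert the digest the same way.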
def generate_keypair(curve):
sk = random.SystemRandom().randint(1, curve.n - 1)
pk = sk * curve.base_point
pk.canonicalize() # needed for ephemeral key gen in sign()
return sk, pk
def sign(curve, secret_key, msg):
assert isinstance(curve, wcurve._Curve)
while True:
esk, epk = generate_keypair(curve)
r = epk.x % curve.n
if r == 0:
continue
e = _big_int_unpack_be(hashlib.sha256(msg.encode('utf8')).digest())
kinv = wcurve._FpArithmetic(curve.n).inverse(esk)
s = (kinv * (e + r * secret_key)) % curve.n
if s == 0:
continue
return r, s
def verify(pub_key, signature, msg):
if not isinstance(pub_key, wcurve.JacobianPoint):
return False
r, s = signature
curve = pub_key.curve
for v in signature:
if not (1 <= v <= (curve.n - 1)):
return False
e = _big_int_unpack_be(hashlib.sha256(msg.encode('utf8')).digest())
sinv = wcurve._FpArithmetic(curve.n).inverse(s)
u1 = e * sinv % curve.n
u2 = r * sinv % curve.n
q = u1 * curve.base_point + u2 * pub_key
if q.is_at_infinity():
return False
v = q.get_affine_x() % curve.n
if r == v:
return True
return False
def run(curve, tag):
sk, pk = generate_keypair(curve)
msg = "My message to sign"
# Signature
start = time.time()
sig = sign(curve, sk, msg)
sign_time = time.time() - start
# For signature verification there is no meaning of using infective
# computations in scalar multiplications.
if curve.infective:
pk.curve = wcurve.secp256r1_curve()
# Verification
start = time.time()
# /!\ in a real implementation the public key would most likely come
# from an untrusted remote party so it would then be required to check
# the validity of the public key before calling this function. That is
# instantiating the right curve, calling JacobianPoint.from_affine()
# or JacobianPoint.uncompress(), and calling JacobianPoint.is_valid().
valid = verify(pk, sig, msg)
verify_time = time.time() - start
print('%-25s: sign=%0.3fs verify=%0.3fs valid=%s' % \
(tag, sign_time, verify_time, valid))
if __name__ == '__main__':
run(wcurve.secp256r1_curve(), 'secp256r1')
run(wcurve.secp256r1_curve_infective(),
'secp256r1_curve_infective')
|
Argos, a leading retailer in the United Kingdom, has been reporting impressive results from the mobile commerce sector. The retailer has been working to become more engaging with mobile consumers in recent years, hoping to offer better services to those tethered to mobile devices. This has become quite common throughout the retail industry. Many companies are eager to engage consumers in new ways, embracing mobile commerce and new marketing initiatives in order to adapt to changes in technology and the shopping habits of consumers.
Argos has seen its mobile sales more than double during the first half of its fiscal year, with overall sales rising by 16%. Approximately 43% of the retailer’s total sales were made online. While the majority of sales were made from traditional computers, sales made through tablets and smartphones grew by 124%. The retailer’s multichannel sales now account for nearly $1 billion, with mobile commerce making up a sizeable portion of that total.
The retailer is currently undergoing something of a digital transformation. Argos has taken note of the growing popularity of smartphones and tablets among shoppers and has been working to grow more accommodating to the needs of these consumers. In the future, many people are likely to shop from their mobile devices rather than visit physical stores. While this has caused some concern among retailers, companies like Argos are taking a proactive approach to the issue, establishing strong online presences in order to remain relevant with consumers.
Mobile commerce has proven to be beneficial to Argos, but other retailers have not fared as well in their mobile endeavors. One of the greatest challenges currently facing retailers has to do with their websites. Most retail sites are not optimized to be used from a mobile device. This leaves many consumers with a poor experience, making it unlikely for these consumers to participate in any mobile commerce ventures coming from these retailers in the future.
|
"""Proxy pattern
Proxy is a structural design pattern. A proxy is a surrogate object which can
communicate with the real object (aka implementation). Whenever a method in the
surrogate is called, the surrogate simply calls the corresponding method in
the implementation. The real object is encapsulated in the surrogate object when
the latter is instantiated. It's NOT mandatory that the real object class and
the surrogate object class share the same common interface.
"""
from abc import ABC, abstractmethod
class CommonInterface(ABC):
"""Common interface for Implementation (real obj) and Proxy (surrogate)."""
@abstractmethod
def load(self):
pass
@abstractmethod
def do_stuff(self):
pass
class Implementation(CommonInterface):
def __init__(self, filename):
self.filename = filename
def load(self):
print("load {}".format(self.filename))
def do_stuff(self):
print("do stuff on {}".format(self.filename))
class Proxy(CommonInterface):
def __init__(self, implementation):
self.__implementation = implementation # the real object
self.__cached = False
def load(self):
self.__implementation.load()
self.__cached = True
def do_stuff(self):
if not self.__cached:
self.load()
self.__implementation.do_stuff()
def main():
p1 = Proxy(Implementation("RealObject1"))
p2 = Proxy(Implementation("RealObject2"))
p1.do_stuff() # loading necessary
p1.do_stuff() # loading unnecessary (use cached object)
p2.do_stuff() # loading necessary
p2.do_stuff() # loading unnecessary (use cached object)
p1.do_stuff() # loading unnecessary (use cached object)
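# Expected output when run as a script (derived from the calls above):
#   load RealObject1
#   do stuff on RealObject1
#   do stuff on RealObject1
#   load RealObject2
#   do stuff on RealObject2
#   do stuff on RealObject2
#   do stuff on RealObject1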
if __name__ == "__main__":
main()
|
#!/usr/bin/python
import sys
from heapq import *
from optparse import OptionParser
def dprint(msg):
print "[%d]: %s" % (time, msg)
arglist = ("count", "size", "num_hops", "bw", "hop_mode", "conn", "uptime", "downtime")
usage = "usage: %prog [options]"
parser = OptionParser(usage)
parser.add_option('--size', type='int', help='size of each message')
parser.add_option('--count', type='int', help='number of messages')
parser.add_option('--num_hops', type='int', help='number of hops')
parser.add_option('--bw', type='int', help='bandwidth')
parser.add_option('--hop_mode', help='hop mode (hop or e2e)')
parser.add_option('--conn', help='connectivity mode')
parser.add_option('--uptime', type='int', help='uptime in seconds')
parser.add_option('--downtime', type='int', help='downtime in seconds')
parser.set_defaults(num_hops=5)
parser.set_defaults(uptime=60)
parser.set_defaults(downtime=240)
(opts, args) = parser.parse_args()
def die():
parser.print_help()
sys.exit(0)
if opts.count == None or \
opts.size == None or \
opts.num_hops == None or \
opts.bw == None or \
opts.hop_mode == None or \
opts.conn == None or \
opts.uptime == None or \
opts.downtime == None: die()
count = opts.count
size = opts.size
num_hops = opts.num_hops
bw = opts.bw
uptime = opts.uptime
downtime = opts.downtime
hop_mode = opts.hop_mode
conn = opts.conn
last = num_hops - 1
total_size = count * size * 8
amount = map(lambda x: 0, range(0, num_hops))
links = map(lambda x: True, range(0, num_hops))
amount[0] = total_size
q = []
class SimDoneEvent:
def __init__(self, time):
self.time = time
def run(self):
print "maximum simulation time (%d) reached... ending simulation" % self.time
sys.exit(1)
class LinkEvent:
def __init__(self, time, link, mode):
self.time = time
self.link = link
self.mode = mode
def __cmp__(self, other):
return self.time.__cmp__(other.time)
def __str__(self):
return "Event @%d: link %d %s" % (self.time, self.link, self.mode)
def run(self):
if self.mode == 'up':
dprint('opening link %d' % self.link)
links[self.link] = True
self.time += uptime
self.mode = 'down'
else:
dprint('closing link %d' % self.link)
links[self.link] = False
self.time += downtime
self.mode = 'up'
queue_event(self)
class CompletedEvent:
def __init__(self, time, node):
self.time = time
self.node = node
def run(self):
pass
def queue_event(e):
global q
# dprint('queuing event %s' % e)
heappush(q, e)
# simulator completion event
time = 0
queue_event(SimDoneEvent(60*30))
# initial link events
if (conn == 'conn'):
pass
elif (conn == 'all2'):
for i in range(1, num_hops):
queue_event(LinkEvent(uptime, i, 'down'))
elif (conn == 'sequential'):
queue_event(LinkEvent(uptime, 1, 'down'))
for i in range(2, num_hops):
links[i] = False
queue_event(LinkEvent((i-1) * 60, i, 'up'))
elif (conn == 'offset2'):
for i in range (1, num_hops):
if i % 2 == 0:
links[i] = False
queue_event(LinkEvent(120, i, 'up'))
else:
queue_event(LinkEvent(uptime, i, 'down'))
elif (conn == 'shift10'):
if num_hops * 10 > 60:
raise(ValueError("can't handle more than 6 hops"))
queue_event(LinkEvent(uptime, 1, 'down'))
for i in range (2, num_hops):
links[i] = False
queue_event(LinkEvent(10 * (i-1), i, 'up'))
else:
raise(ValueError("conn mode %s not defined" % conn))
print 'initial link states:'
for i in range(0, num_hops):
print '\t%d: %s' % (i, links[i])
def can_move(i):
if hop_mode == 'hop':
dest = i+1
hops = (i+1,)
else:
dest = last
hops = range(i+1, last+1)
for j in hops:
if links[j] != True:
# dprint("can't move data from %d to %d since link %d closed" % (i, dest, j))
return False
return True
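# Illustrative example: with num_hops = 5 and hop_mode = 'e2e', data at
# node 1 may only move when links 2, 3 and 4 are all up simultaneously;
# in 'hop' mode it only needs link 2 to reach the next node.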
# proc to shuffle a given amount of data through the network
def move_data(interval):
dprint('%d seconds elapsed... trying to move data' % interval)
for i in range(0, last):
if not can_move(i):
continue
if hop_mode == 'hop':
dest = i+1
else:
dest = last
amt = min(amount[i], interval * bw)
if (amt != 0):
dprint('moving %d/%d bits (%d msgs) from %d to %d' %
(amt, amount[i], amt / (size*8), i, dest))
amount[i] -= amt
amount[dest] += amt
if dest == last and amount[dest] == total_size:
print "all data transferred..."
print "ELAPSED %d" % (time + interval)
sys.exit(0)
def blocked():
for i in range(0, last):
if can_move(i):
return False
return True
def completion_time():
# if nothing can move, then we have infinite completion time
if blocked():
return 9999999999.0
return float(sum(amount[:-1])) / float(bw)
while True:
try:
next_event = heappop(q)
except:
raise RuntimeError('no events in queue but not complete')
tcomplete = completion_time()
elapsed = next_event.time - time
if (tcomplete < elapsed):
dprint('trying to move last chunk')
move_data(tcomplete)
time = next_event.time
if (elapsed != 0 and not blocked()):
move_data(elapsed)
next_event.run()
|
The rise of Android phones in India has been phenomenal, with over 600% YOY growth in the year 2011. Today they hold a comfortable 34% of the Indian smartphone market, behind only the (now aging) Symbian OS. As analysts and experts have predicted Android phones to double their market share in 2012, and as Samsung, the biggest proponent of Android, grows in India, one can expect greater things in the year 2012. Smartphones have grown phenomenally in the year 2011 with a YOY growth of 87%.
It is about time for the average Indian to move from the feature phone to the smartphone, and I believe there are 3 strong reasons why Android would not just grow at an increasing rate but in fact break out with exponential growth.
The growth of cellphones happened about a decade ago thanks to the availability of cheaper mobile phones with attractive features. Today mobile phones are available at prices as low as $15-$20, or roughly Rs 1000, in India.
These phones and the proliferation of the communication network all over India have helped the growth of the mobile phone industry. It is no surprise, then, that Sunil Mittal, CEO of Airtel, India’s largest private telecommunications company, stressed the need for super cheap smartphones. Mittal addressed the World Mobile Congress in Barcelona, declaring it is time for GSMA, a consortium of hundreds of mobile operators, to trigger the adoption of cheaper smartphones at prices of less than $100 by placing orders of at least 100 million models. Mittal’s statement echoes that of Google India head Rajan Anandan, who has also stated that for smartphones to break the 200 million barrier, a price reduction of the order of 30% is needed. The figure Rajan comes up with, not surprisingly, is also Rs 5000.
There are currently a few products in the market from domestic players that fall below the Rs 5000 barrier. However, most of these run on an outdated version of Android and feature substandard hardware. Among main brand manufacturers, Samsung is perhaps the closest to this barrier, with the Samsung Galaxy Y priced already at about 7k rupees.
We may expect a price drop soon as more Samsung models are launched. Samsung will also be launching the Samsung Galaxy Pocket at a price point lower than the Galaxy Y, which could effectively help Android breach the $100 barrier in a big way.
India is the largest adopter of dual SIM phones in the world. While it started as a quirky feature in a few local brands, the phenomenon caught on like wild fire. Varying call rates between operators, the need to separate work and personal life, or even perhaps the need to travel between 2 states to work or study, had always forced many average mobile phone users to carry 2 cellphones. Dual SIMs solved this need and are the norm today.
With over 58% market share, you would be hard pressed to find a person who hasn’t owned a dual SIM phone at some stage in his/her life. Android phones and other smartphones didn’t have access to this 58% of the market. The lack of smartphones with dual SIMs had always kept a good chunk of consumers from moving on from their dual SIM feature phone handsets. Samsung recently launched its own range of dual SIM Androids to capitalize on this innate need. With the Samsung Galaxy Y Duos and the dual SIM variants of its popular Androids, the Galaxy Y and Galaxy Ace, Samsung will now be leading the charge along with a few domestic makers to convince the dual SIM feature phone user to migrate to a smartphone.
For a feature phone user, the general perception of a smartphone has been an expensive phone that lets you play great looking games and take great looking photographs. However, the recent spate of promotion of Android applications by utility companies, websites and cellphone manufacturers has changed this perception.
Be it apps for banking, booking tickets, e-commerce or newspapers, there is an Android app that caters to just about any of your personal needs. Google Play as of today features over 1000 free Indian apps for every perceivable need, be it Book My Show for booking your movie tickets, HDFC’s banking app for banking, Flipkart’s Android app to shop, or Zomato to find a good place to eat.
With more aggressive promotion by mobile manufacturers and service providers, we can expect more feature phone users to stand up and take notice of the smartphone revolution. Smartphones have long established their utility for navigation, instant search and staying constantly in touch with friends. The proliferation of Android apps will definitely help the growth of Android in the year 2012.
At the moment, however, only a few public service utilities are available on Android, and wider availability will prove to be a great boost for the growth of Android. The IRCTC app lets you book train tickets and view the status of your reservation.
Certain state governments, such as Andhra Pradesh and Karnataka, have made their electronic bill payment and utility service portals available as Android apps too. We can expect these public utility companies to promote their Android apps more actively and help create awareness of the limitless opportunities presented by a smartphone.
The rise of Android in India seems inevitable at the moment. With increasing net additions, falling price points and the availability of more India-centric utility apps, Android has caught the imagination of just about every average Indian. It would be no surprise if 2012 witnesses the breakout of Android, thanks to the combined efforts of mobile manufacturers such as Samsung coming out with cheaper models, developers coming out with more utilitarian apps, and the aggressive growth of telecommunication networks in the country.
The future of Android is bright in the year 2012.
Sitakanta is co-founder of web startup - MySmartPrice.com. He is an alumnus of IIM-Bangalore and has avid interests in Latest Tech & Mobile trends.
Women bosses in Indian corporates: What is driving their growth?
Is Relaunch of Android One Project on Sunder Pichai’s Agenda During India Visit?
By providing an open research and development platform, Android offers utility developers the ability to build extremely rich & innovative applications. Developers are free to take advantage of the device hardware, access location information, run background services, add notifications to the status bar, set alarms, and much, much more. Recently I came across a website which provides additional applications, like checking your eyes and so on… this product also gives you reward points just for entering the shops with which they have tie-ups… it's really very interesting to come across such start-ups.
Interesting article, sir. More innovative mobile utility apps are getting into the Indian smartphone market, especially on OSes like Android and BlackBerry. Recently I was impressed by the innovative MintM app on Android, converging loyalty and m-commerce together. Great value and interesting to use, as I’m an active shopper. Freely downloadable at..
Free innovative utilities like these, provided by Android, increase the usage of smartphones to a large extent.
Do you believe Android’s phenomenal growth will NOT have a detrimental effect on the quality of apps available for the platform?
|
import requests
import sys
import pymongo
from itertools import islice
def run_mirbase_download():
load_mirbase_list(0)
return 0
def get_mir_data(mirna):
client = pymongo.MongoClient()
mirna_data = {
'results': []
}
mirnaarray = mirna.split(',')
    for mirna_item in mirnaarray:
        terms = list(client.dataset.mirbase.find({'mId': mirna_item}))
        if len(terms) > 0:
mirna_data['results'].append({
'id': mirna_item,
'information': terms[0]['mirna_information']
})
return mirna_data
def get_mir_name_converter(term_id):
client = pymongo.MongoClient()
terms = list(client.dataset.mirbase.find({'mId': term_id}))
    if len(terms) > 0:
return terms[0]['mirna_id']
else:
return "UNKNOWN"
def get_mirbase_info(mirna_id): # EXT
mir_resolved_id = get_mir_name_converter(mirna_id)
    if mir_resolved_id != "UNKNOWN":
url = 'http://mygene.info/v2/query?q=' + mir_resolved_id
r = requests.get(url)
r_json = r.json()
if 'hits' in r_json and len(r_json['hits']) > 0:
entrezgene_id = r_json['hits'][0]['entrezgene']
url2 = 'http://mygene.info/v2/gene/' + str(entrezgene_id)
r2 = requests.get(url2)
r2_json = r2.json()
return r2_json
return r
else:
return "UNKNOWN TERM"
def load_mirbase_list(file_batch_number):
url = 'http://ec2-54-148-99-18.us-west-2.compute.amazonaws.com:9200/_plugin/head/mirna.txt'
r = requests.get(url)
lines = r.iter_lines()
def parse(lines):
for line in lines:
try:
c1, mirna_id, mId, c2, c3, c4, mirna_information, c5 = line.split('\t')
yield {
'mirna_id': mirna_id,
'mId': mId,
'mirna_information': mirna_information
}
            except Exception:
                # skip malformed lines that do not have the expected 8 columns
                continue
db = pymongo.MongoClient().dataset
collection = db.mirbase
collection.drop()
count = 0
iterator = parse(lines)
while True:
records = [record for record in islice(iterator, 1000)]
if len(records) > 0:
count += len(collection.insert_many(records).inserted_ids)
else:
break
collection.create_indexes([
pymongo.IndexModel([('mirna_id', pymongo.ASCENDING)]),
pymongo.IndexModel([('mId', pymongo.ASCENDING)])
])
def main():
return 0
if __name__ == '__main__':
sys.exit(main())
|
The region of Campania has been known as a paradise on earth from ancient times on. The Greeks built some of their most impressive temples here and under the Romans it became known as "Campania Felix" or the "Happy Land". When travelling the region, one becomes overwhelmed by its wealth of cultural and natural attractions, from the great city of Naples to the well-known Costiera Amalfitana, with its towering cliffs and picturesque coves.
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# Copyright 2010 C Sommer, C Straehle, U Koethe, FA Hamprecht. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without modification, are
# permitted provided that the following conditions are met:
#
# 1. Redistributions of source code must retain the above copyright notice, this list of
# conditions and the following disclaimer.
#
# 2. Redistributions in binary form must reproduce the above copyright notice, this list
# of conditions and the following disclaimer in the documentation and/or other materials
# provided with the distribution.
#
# THIS SOFTWARE IS PROVIDED BY THE ABOVE COPYRIGHT HOLDERS ``AS IS'' AND ANY EXPRESS OR IMPLIED
# WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND
# FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE ABOVE COPYRIGHT HOLDERS OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
# SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
# ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
# NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF
# ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
#
# The views and conclusions contained in the software and documentation are those of the
# authors and should not be interpreted as representing official policies, either expressed
# or implied, of their employers.
"""
Watershed iterative segmentation plugin
"""
from segmentorBase import *
from enthought.traits.api import Float, Int
from enthought.traits.ui.api import View, Item
#from segmentorWSit import SegmentorWSiter
ok = False
try:
import vigra.tws
ok = True
except Exception, e:
pass
if 0:
#*******************************************************************************
# S e g m e n t o r S V 2 *
#*******************************************************************************
class SegmentorSV2(SegmentorBase):
name = "Supervoxel Segmentation 2"
description = "Segmentation plugin using sparse Basin graph"
author = "HCI, University of Heidelberg"
homepage = "http://hci.iwr.uni-heidelberg.de"
bias = Float(64*8)
biasedLabel = Int(1)
maxHeight = Float(1024)
view = View( Item('bias'), Item('maxHeight'), Item('biasedLabel'), buttons = ['OK', 'Cancel'], )
#*******************************************************************************
# I n d e x e d A c c e s s o r *
#*******************************************************************************
class IndexedAccessor:
"""
Helper class that behaves like an ndarray, but does a Lookuptable access
"""
def __init__(self, volumeBasins, basinLabels):
self.volumeBasins = volumeBasins
self.basinLabels = basinLabels
self.dtype = basinLabels.dtype
self.shape = volumeBasins.shape
def __getitem__(self, key):
return self.basinLabels[self.volumeBasins[tuple(key)]]
def __setitem__(self, key, data):
#self.data[tuple(key)] = data
print "##########ERROR ######### : SegmentationDataAccessor setitem should not be called"
def segment3D(self, labelVolume, labelValues, labelIndices):
self.ws.setBias(self.bias, self.biasedLabel, self.maxHeight)
self.basinLabels = self.ws.flood(labelValues, labelIndices)
            self.acc = SegmentorSV2.IndexedAccessor(self.volumeBasins, self.basinLabels)
return self.acc
def segment2D(self, labels):
#TODO: implement
            return labels
def setupWeights(self, weights):
print "Incoming weights :", weights.shape
#self.weights = numpy.average(weights, axis = 3).astype(numpy.uint8)#.swapaxes(0,2).view(vigra.ScalarVolume)#
if weights.dtype != numpy.uint8:
print "converting weights to uint8"
self.weights = weights.astype(numpy.uint8)
# self.weights = numpy.zeros(weights.shape[0:-1], 'uint8')
# self.weights[:] = 3
# self.weights[:,:,0::4] = 10
# self.weights[:,0::4,:] = 10
# self.weights[0::4,:,:] = 10
# self.weights = self.weights
self.ws = vigra.tws.IncrementalWS2(self.weights)
self.volumeBasins = self.ws.getVolumeBasins() #WithBorders()
print "Outgoing weights :", self.volumeBasins.shape
self.volumeBasins.shape = self.volumeBasins.shape + (1,)
|
Wow. Life. It comes at you fast and you have to face it. Why? Sometimes things happen and you aren't sure why.
|
# This file is part of ReText
# Copyright: 2014 Dmitry Shachnev
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation; either version 2 of the License, or
# (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import unittest
from ReText.editor import documentIndentMore, documentIndentLess
from PyQt5.QtGui import QTextCursor, QTextDocument
class SettingsMock:
tabWidth = 4
tabInsertsSpaces = True
class TestIndentation(unittest.TestCase):
def setUp(self):
self.document = QTextDocument()
self.document.setPlainText('foo\nbar\nbaz')
self.settings = SettingsMock()
def test_indentMore(self):
cursor = QTextCursor(self.document)
cursor.setPosition(4)
documentIndentMore(self.document, cursor, self.settings)
self.assertEqual('foo\n bar\nbaz',
self.document.toPlainText())
cursor.setPosition(3)
documentIndentMore(self.document, cursor, self.settings)
self.assertEqual('foo \n bar\nbaz',
self.document.toPlainText())
def test_indentMoreWithTabs(self):
cursor = QTextCursor(self.document)
self.settings.tabInsertsSpaces = False
documentIndentMore(self.document, cursor, self.settings)
self.assertEqual('\tfoo\nbar\nbaz', self.document.toPlainText())
def test_indentMoreWithSelection(self):
cursor = QTextCursor(self.document)
cursor.setPosition(1)
cursor.setPosition(6, QTextCursor.KeepAnchor)
self.assertEqual('oo\u2029ba', # \u2029 is paragraph separator
cursor.selectedText())
documentIndentMore(self.document, cursor, self.settings)
        self.assertEqual('    foo\n    bar\nbaz',
                         self.document.toPlainText())
def test_indentLess(self):
        self.document.setPlainText('        foo')
cursor = QTextCursor(self.document)
cursor.setPosition(10)
documentIndentLess(self.document, cursor, self.settings)
        self.assertEqual('    foo', self.document.toPlainText())
documentIndentLess(self.document, cursor, self.settings)
self.assertEqual('foo', self.document.toPlainText())
def test_indentLessWithSelection(self):
        self.document.setPlainText('    foo\n    bar\nbaz')
cursor = QTextCursor(self.document)
cursor.setPosition(5)
cursor.setPosition(11, QTextCursor.KeepAnchor)
documentIndentLess(self.document, cursor, self.settings)
self.assertEqual('foo\nbar\nbaz', self.document.toPlainText())
if __name__ == '__main__':
unittest.main()
|
STC, together with UXPA, has a unique, three-day online event lined up for you September 24, 25, and 26. For two hours on each of those three days, you will follow presentations on many aspects of accessibility and usability and learn how they fit into the bigger picture of technical communication.
The current roster of speakers has some great names.
Visit the registration page at STC.org now for pricing and … registration!
|
from unittest import TestCase
import website.query
import pandas as pd
import json
class TestLimits(TestCase):
def test_adding_simple_limits(self):
query = website.query.Query(
query_text="select * from some_table",
db=1)
query.add_limit()
self.assertEqual(
query.query_text,
"select * from some_table limit 1000;")
def test_semicolon_limits(self):
query = website.query.Query(
query_text="select * from some_table;",
db=1)
query.add_limit()
self.assertEqual(
query.query_text,
"select * from some_table limit 1000;")
def test_limit_already_exists(self):
query = website.query.Query(
query_text="select * from some_table limit 10",
db=1)
query.add_limit()
self.assertEqual(
query.query_text,
"select * from some_table limit 10")
def test_limit_semicolon_already_exists(self):
query = website.query.Query(
query_text="select * from some_table limit 10;",
db=1)
query.add_limit()
self.assertEqual(
query.query_text,
"select * from some_table limit 10;")
class TestSafety(TestCase):
    def test_stop_words(self):  # renamed from stop_words so the test runner discovers it
base_query = "select * from some_table"
stop_words = ['insert', 'delete', 'drop',
'truncate', 'alter', 'grant']
for word in stop_words:
query = website.query.Query(
query_text="%s %s " % (word, base_query),
db=1)
self.assertRaises(TypeError, query.check_safety)
class TestManipulateData(TestCase):
"""def test_numericalize_data_array(self):
md = website.query.ManipulateData(
query_text='',
db='')
md.data_array = [['a', '3', '4.0', '2014-01-02']]
return_array = md.numericalize_data_array()
self.assertListEqual(return_array, [['a', 3, 4.0, '2014-01-02']])
"""
def test_pivot(self):
md = website.query.ManipulateData(
query_text='',
db='')
test_data = {
'col1': ['cat', 'dog', 'cat', 'bear'],
'col2': ['summer', 'summer', 'winter', 'winter'],
'val': [1, 2, 3, 4]}
md.data = pd.DataFrame(test_data)
return_data = json.loads(md.pivot().to_json())
self.assertDictEqual(
return_data,
{
"col1": {"0": "bear", "1": "cat", "2": "dog"},
"summer": {"0": 0.0, "1": 1.0, "2": 2.0},
"winter": {"0": 4.0, "1": 3.0, "2": 0.0}
})
|
The Alfred Hitchcock Ultimate Collection is a must-have set for any Alfred Hitchcock lover or anyone who loves classic action films! This set includes 15 movies and 10 of the Alfred Hitchcock TV shows. This DVD collection is available for only $69.95!
|
# -*- coding: utf-8 -*-
from arshidni.models import GraduateProfile, Question, Answer, StudyGroup, LearningObjective, JoinStudyGroupRequest, ColleagueProfile, SupervisionRequest
from django import forms
class GraduateProfileForm(forms.ModelForm):
class Meta:
model = GraduateProfile
fields = ['contacts', 'bio', 'interests',
'answers_questions', 'gives_lectures']
class QuestionForm(forms.ModelForm):
class Meta:
model = Question
fields = ['text']
class AnswerForm(forms.ModelForm):
class Meta:
model = Answer
fields = ['text']
class StudyGroupForm(forms.ModelForm):
class Meta:
model = StudyGroup
fields = ['name', 'starting_date', 'ending_date',
'max_members']
def clean(self):
cleaned_data = super(StudyGroupForm, self).clean()
if 'starting_date' in cleaned_data and 'ending_date' in cleaned_data:
if cleaned_data['starting_date'] > cleaned_data['ending_date']:
msg = u'تاريخ انتهاء المدة قبل تاريخ بدئها!'
self._errors["starting_date"] = self.error_class([msg])
self._errors["ending_date"] = self.error_class([msg])
# Remove invalid fields
del cleaned_data["starting_date"]
del cleaned_data["ending_date"]
new_learningobjective_fields = [field for field in self.data if field.startswith('new_learningobjective-')]
existing_learningobjective_fields = [field for field in self.data if field.startswith('existing_learningobjective-')]
for field_name in new_learningobjective_fields:
text = self.data[field_name].strip()
if not text: # if empty
continue
cleaned_data[field_name] = self.data[field_name]
for field_name in existing_learningobjective_fields:
text = self.data[field_name].strip()
if not text: # if empty
continue
cleaned_data[field_name] = self.data[field_name]
return cleaned_data
def clean_max_members(self):
"Define max_members range."
# TODO: Move this hard-coded number into a Django setting.
# The maximum number of students in each group is 8.
max_members = self.cleaned_data["max_members"]
if max_members > 8:
msg = u'لا يمكن أن يكون عدد أعضاء المجموعة أكثر من 8!'
self._errors["max_members"] = self.error_class([msg])
elif max_members < 3:
msg = u'لا يمكن أن يكون عدد أعضاء المجموعة أقل من 3!'
self._errors["max_members"] = self.error_class([msg])
return max_members
def save(self, *args, **kwargs):
group = super(StudyGroupForm, self).save(*args, **kwargs)
remaining_pk = [] # List of kept learning objects (whether
# modified or not)
new_learningobjective_fields = [field for field in self.cleaned_data if field.startswith('new_learningobjective-')]
existing_learningobjective_fields = [field for field in self.cleaned_data if field.startswith('existing_learningobjective-')]
for field_name in new_learningobjective_fields:
text = self.cleaned_data[field_name]
new_learningobjective = LearningObjective.objects.create(group=group,text=text)
remaining_pk.append(new_learningobjective.pk)
for field_name in existing_learningobjective_fields:
            # Slice off the prefix rather than using str.lstrip(), which
            # strips a character *set* and could eat into the pk digits.
            pk_str = field_name[len("existing_learningobjective-"):]
pk = int(pk_str)
remaining_pk.append(pk)
text = self.cleaned_data[field_name]
existing_learningobjective = LearningObjective.objects.get(pk=pk)
existing_learningobjective.text = text
existing_learningobjective.save()
deleted_learningobjectives = LearningObjective.objects.exclude(pk__in=remaining_pk).filter(group=group)
for deleted_learningobjective in deleted_learningobjectives:
print "Deleting", deleted_learningobjective.text
deleted_learningobjective.delete()
return group
class ColleagueProfileForm(forms.ModelForm):
class Meta:
model = ColleagueProfile
fields = ['batch', 'contacts', 'bio', 'interests', 'tags']
class SupervisionRequestForm(forms.ModelForm):
class Meta:
model = SupervisionRequest
fields = ['batch', 'contacts', 'interests']
|
Achieve Perfection by Redefining Perfect!
I spent a good portion of my life trying to achieve perfection in one form or another. But all of that changed one day when I found myself waiting for a friend in a quiet Irish Pub on a Sunday afternoon.
See, this was not your average run-of-the-mill Irish Pub, but a “Thank God I’m Irish-American Pub,” with pictures of presidents and historical documents lining the walls. And as I waited for my friend, I noticed a large copy of the United States Constitution hanging on the wall in front of me.
He nodded back with a slight grin on his face, took a sip of his whiskey and settled back into his morning paper as if we had never had this conversation. I, in turn, read the remainder of the Constitution over a second pint, paid my tab and thanked the old man once again for the civics lesson.
My friend never did show up, but I was now able to see the perfection in that.
On one of the few days it rained in LA, I decided to take the Number 2 bus to meet a friend. By the time the bus came, I was completely soaked. The newspaper over my head and the palm tree above were no match for mother nature.
As I stepped into the damp and musty bus, I felt the tension of the standing-room-only crowd. I paid my fare and found a place to stand next to a well-dressed old man who was seated comfortably, eyes closed, holding a cane in one hand and an umbrella in the other, making him the only dry passenger. At the next abrupt stop he opened his eyes, sat up and looked to the front of the bus.
“We’re at Genesee,” I said. He nodded and continued to look forward.
Nothing was said by them or anyone else on the bus until a few stops later, when the elderly gentleman pulled the stop cord and proceeded to the front exit. All eyes were on him. The entire bus was silent. He turned slowly to look back at the elderly woman and nodded. She nodded back, blushed and looked down to find his umbrella at her side.
“Oh, you forgot your umbrella,” she said.
He smiled and said, “No, I didn’t…enjoy” as he proceeded out of the bus and into the rain, welcoming each and every drop.
Think about it. What is truly yours? What did you come into this world with? What will you leave with? What is not transient? What remains without change?
What thoughts are uniquely yours? Aren’t many of your thoughts given to you by others? Could you consider giving up thoughts that don’t serve you and sharing those that do?
Are your habits your own or did you learn them from others? Could you let go of the bad habits and share the good ones?
Could you begin to let go of your attachment to things? Could you learn to share those things that serve you and let go of those things that don’t?
Could you realize how much energy it takes to possess things? Could you better use that energy to live more fully in the present?
Could you begin to see the gift of life you are and share it with others?
Over time you may realize little is ever really yours, and that what matters is rarely matter.
Six Degrees of Manifestation refers to the fact that you are a mere six degrees or steps away from manifesting anything you want in your life.
1. Be Present – The first step is to become present once again. Once again? That’s right. When you were a baby you were nothing but present. Somewhere along the way you chose to live a good part of your life in the past and future. You become present again through awareness and the exercises found throughout iwishicouldtellyou.com.
2. Pay Attention – Now that you’re present, you need to pay attention to two wonderful things: you and the world around you. Sounds simple? Think again. That’s a lot of stuff. Here’s the trick: stay present, observe and listen to your gut. In a present state you are more likely to determine what is best for you and worthy of your attention.
3. Place Your Intention – Placing your intention is when things really get exciting. Placing intention on a thing increases your attention on it and your connection to it. When you place your intention you are literally saying “I intend on experiencing ____________.” The more you focus on something, the greater the gravitational pull you will have to it and it to you. Some folks call this attraction.
5. Permit – Now that you have followed your heart into the world of possibilities, you must allow yourself, or rather give yourself permission, to have what you paid attention to, placed your intention on and perceived into your world. This is the most challenging part for most people. This is the moment of truth. This is where we find out if what we wanted is what we truly wanted. Well, you’ll never know until you allow it into your life. If it’s not what you want you can always start over and manifest something else. So permit/allow and embrace that which you intended.
6. Participate, Play and Pass it on – Last but not least, experience what you have manifested fully and share it with others.
Traveling at the Speed of Sight!
Change your speed, change your life!
Let’s face it. Most of us have a tendency to travel throughout our days at the same relative speed. At this relative speed, we see the same things we’ve always seen and process them in the same way we’ve always processed them. But what if we changed our speed from time to time? What if we stopped, slowed down and/or sped up? Could we dramatically change our lives?
How did I come to this conclusion? Well, simply put, I had food poisoning. Something I wouldn’t wish on anyone but something that helped me understand that the speed at which I moved through my life affected how I saw things. And as my perception changed, so did my world.
A perfect example of this came somewhere in the middle of my recovery, when I was finally able to get up and take a walk outside. As I stepped outside for the first time in days, I saw a hummingbird out of the corner of my eye by the fruit tree outside my house. Without any reservation I mustered enough strength to slowly walk over to the tree and watch him. Before I knew it I was within arm’s length of him. I stood there for some time watching him gracefully extract nectar out of each blossom. When he was done he pulled slightly back from the tree, turned toward me, looked right into my eyes, hovered for a timeless moment, and then casually flew off.
Would I have been able to see or experience anything like this traveling at my normal speed?
Could I alter my speed throughout my day and get better results?
Could I travel at a speed more conducive to what matters most to me?
Am I controlling the speed of my life or am I letting others dictate my speed?
What speed brings me the most peace?
Is there a speed of least resistance?
Could I meet you at your speed?
This is a snapshot of a process I call TEST STRIPS.
TEST STRIPS are a quick way to evaluate where you are in the present moment in relation to where you aspire to be.
1. Choose an aspect of your life you wish to focus on and acknowledge its opposite.
2. Place the two opposing ideas across from each other on the strip like the one above.
3. Throughout the week, the day or a particular task, ask yourself a question like this: from 10 to 1, how much ____________ (the concept you wish to have) am I experiencing? From that number you will be able to determine where you are in relation to what you wish to create.
4. Decide/Focus on/Intend to move closer to your desired state or experience. (Note: never focus on where you don’t want to be, because what you focus on expands in your life.) Create intentions that state, “I wish to be more/experience more __________,” and mantras that declare, “I am _________ (that which you aspire to create more of),” and recite them throughout the day.
5. Ask yourself leading questions. What could I do in this moment to become more ___________? What new thoughts could I think to become more________? What thoughts/ideas could I let go of that may be holding me back from attaining more ________?
6. Think and act according to your intention and repeat this exercise regularly.
Here are a few of the general concepts that my clients have created.
What aspect of your life would you like to work on this week?
|
#!/usr/bin/env python
# -*- coding: utf-8 -*-
#
# Copyright (C) 2016 Francesco Lumachi <francesco.lumachi@gmail.com>
from __future__ import division
from gensim import corpora, utils
from six import iteritems, itervalues, iterkeys
from collections import defaultdict
from itertools import tee, izip
from scipy.stats import rv_discrete
class Bidictionary(utils.SaveLoad):
# TODO not completely implemented all methods as a gensim dictionary!
"""
    This object provides a convenient way to parse documents (seen as lists of
    tokens in appearance order!) while extracting bigram frequencies, returning a
    gensim-like "bag-of-bigrams" vector in which features (bi-tokens) are
    identified by a tuple (firstId, secondId).
Ex: bd = Bidictionary(documents=<some_corpus>)
bd[<token_id>] # return id->plain_token (from corpora.Dictionary)
bd.token2id[<token>] # return plain_token->id (from corpora.Dictionary)
bd.fid_sid2id[<firstId>, <secondId>] # return tokenid, tokenid -> bitokenid
bd.dfs[bitokenid] # return document frequency of bitoken
"""
def __init__(self, documents=None, prune_at=2000000, doc_start='^', doc_end='$'):
"""
        Choose doc_start and doc_end to be characters ignored by the tokenizer;
        otherwise the statistics about start/end tokens will be compromised.
"""
self._unidict = corpora.Dictionary()
# add dummy doc to map start/end chars (will produce len(unidict)+1: nevermind)
self._unidict.doc2bow([doc_start, doc_end], allow_update=True)
self.doc_start, self.doc_end = doc_start, doc_end
# Statistics gensim-like
self.fid_sid2bid = {} # (firstid, secondid) -> tokenId
self.bid2fid_sid = {} # TODO: reverse mapping for fid_sid2bid; only formed on request, to save memory
self.dfs = {} # document frequencies: tokensId -> in how many documents those tokens appeared
self.num_pos = 0 # total number of corpus positions
self.num_nnz = 0 # total number of non-zeroes in the BOW matrix
if documents is not None:
self.add_documents(documents, prune_at=prune_at)
num_docs = property(lambda self: self._unidict.num_docs - 1) # 1 is the dummy doc ['^','$']
def doc2bob(self, document, allow_update=False, return_missing=False):
""" Document tokens are parsed pairwise to produce bag-of-bitokens features """
positional_doc = [self.doc_start] + document + [self.doc_end]
# Index single tokens
self._unidict.doc2bow(positional_doc, allow_update, return_missing)
# Construct ((firstid, secondid), frequency) mapping.
d1, d2 = tee(positional_doc)
next(d2, None) # step ahead second iterator
counter = defaultdict(int)
for first, second in izip(d1, d2):
# saving space using same indexes as unidict
try:
firstid = self._unidict.token2id[first]
secondid = self._unidict.token2id[second]
counter[firstid, secondid] += 1
            except KeyError:  # one or both tokens aren't indexed: skip.
continue
fid_sid2bid = self.fid_sid2bid
if allow_update or return_missing:
missing = dict((f_s, freq) for f_s, freq in iteritems(counter) if f_s not in fid_sid2bid)
if allow_update:
for w in missing:
# new id = number of ids made so far;
# NOTE this assumes there are no gaps in the id sequence!
fid_sid2bid[w] = len(fid_sid2bid)
result = dict((fid_sid2bid[w], freq) for w, freq in iteritems(counter) if w in fid_sid2bid)
if allow_update:
self.num_pos += sum(itervalues(counter))
self.num_nnz += len(result)
# increase document count for each unique token that appeared in the document
dfs = self.dfs
for bid in iterkeys(result):
dfs[bid] = dfs.get(bid, 0) + 1
# return tokensids, in ascending id order
result = sorted(iteritems(result))
if return_missing:
return result, missing
else:
return result
def add_documents(self, docs, prune_at=2000000):
for d in docs:
self.doc2bob(d, allow_update=True)
def tokens2bid(self, tokens):
"""
:param tokens: need to be a tuple ('a','b')
"""
fid, sid = self._unidict.token2id[tokens[0]], self._unidict.token2id[tokens[1]]
return self.fid_sid2bid[(fid, sid)]
def __getitem__(self, ids):
# If you want the frequency, you need to ask for a "bid" and then to self.dfs[bid]
if isinstance(ids, int): return self._unidict.__getitem__(ids) # tid -> 'token'
if isinstance(ids, str): return self._unidict.token2id[ids] # 'token' -> id
if isinstance(ids, tuple):
if isinstance(ids[0], int): return self.fid_sid2bid[ids] # fid, sid -> bid
if isinstance(ids[0], str): return self.tokens2bid(ids) # 'a', 'b' -> bid
def save_as_text(self, fname, sort_by_word=True):
"""
Save this Dictionary to a text file, in format:
`id[TAB]fid[TAB]sid[TAB]document frequency[NEWLINE]`
        while `_unidict` is saved alongside as a usual gensim dictionary
"""
self._unidict.save_as_text(fname + '.index', sort_by_word)
with utils.smart_open(fname, 'wb') as fout:
# no word to display in bidict
for fid_sid, id in sorted(iteritems(self.fid_sid2bid)):
line = "%i\t%i\t%i\t%i\n" % (id, fid_sid[0], fid_sid[1], self.dfs.get(id, 0))
fout.write(utils.to_utf8(line))
@staticmethod
def load_from_text(fname):
"""
Load a previously stored Dictionary from a text file.
Mirror function to `save_as_text`.
"""
result = Bidictionary()
# restore _unidict as gensim dictionary
result._unidict = corpora.Dictionary.load_from_text(fname + '.index')
with utils.smart_open(fname) as f:
for lineno, line in enumerate(f):
line = utils.to_unicode(line)
try:
bid, fid, sid, docfreq = line[:-1].split('\t')
fid_sid = (int(fid), int(sid))
except Exception:
raise ValueError("invalid line in dictionary file %s: %s"
% (fname, line.strip()))
bid = int(bid)
if fid_sid in result.fid_sid2bid:
raise KeyError('token %s is defined as ID %d and as ID %d' % (fid_sid, bid, result.fid_sid2bid[fid_sid]))
result.fid_sid2bid[fid_sid] = bid
result.dfs[bid] = int(docfreq)
return result
def mle(self, estimated, given):
""" Compute Maximum Likelihood Estimation probability
to extract the second token given first. """
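        # P(estimated | given) = df(given, estimated) / df(given), using
        # document frequencies as counts.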
try:
firstid = self._unidict.token2id[given]
secondid = self._unidict.token2id[estimated]
return self.dfs[self.fid_sid2bid[firstid, secondid]] / self._unidict.dfs[firstid]
except KeyError:
return 0.0
def mlebyids(self, estimated, given):
""" Compute Maximum Likelihood Estimation probability
to extract the second token id given first id. """
try:
return self.dfs[self.fid_sid2bid[given, estimated]] / self._unidict.dfs[given]
except KeyError:
return 0.0
def generate_text(self, seed, n):
""" Given a seed token, produce n likelihood tokens to follow. """
def nexttokenid(seedid):
candidates = [sid for fid, sid in self.fid_sid2bid.keys() if fid == seedid]
if len(candidates) == 0: raise StopIteration
probs = [self.mlebyids(probid, seedid) for probid in candidates]
return rv_discrete(values=(candidates, probs)).rvs()
seedid = self._unidict.token2id[seed]
text = [seed]
        for _ in range(n):
try:
seedid = nexttokenid(seedid)
text.append(self._unidict[seedid])
except StopIteration:
break
return ' '.join(text)
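# Minimal usage sketch (illustrative only; the toy corpus below is made up):
# bd = Bidictionary(documents=[['the', 'cat'], ['the', 'dog']])
# bd.mle('cat', 'the') # -> 0.5: ('the', 'cat') occurs in 1 of 2 docs with 'the'
# bd.generate_text('the', 3) # random walk over the bigram MLE probabilities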
|
We offer high-quality glazing repairs for customers in New Milton. Our team of expert glaziers has years of experience in the glazing industry, and we specialise in the repair and maintenance of uPVC windows, doors, porches, conservatories, patio doors and Velux windows. We are also expert locksmiths, with over 30 years’ experience in the trade. We believe that most windows and doors can be repaired, saving our customers the expense of buying new ones.
|
# Parsec Cloud (https://parsec.cloud) Copyright (c) AGPLv3 2016-2021 Scille SAS
import attr
import pendulum
from uuid import UUID
from typing import List, Dict, Optional, Tuple
from parsec.api.data import UserProfile
from parsec.api.protocol import DeviceID, UserID, OrganizationID
from parsec.backend.backend_events import BackendEvent
from parsec.backend.realm import (
MaintenanceType,
RealmGrantedRole,
BaseRealmComponent,
RealmRole,
RealmStatus,
RealmStats,
RealmAccessError,
RealmIncompatibleProfileError,
RealmAlreadyExistsError,
RealmRoleAlreadyGranted,
RealmNotFoundError,
RealmEncryptionRevisionError,
RealmParticipantsMismatchError,
RealmMaintenanceError,
RealmInMaintenanceError,
RealmNotInMaintenanceError,
)
from parsec.backend.user import BaseUserComponent, UserNotFoundError
from parsec.backend.message import BaseMessageComponent
from parsec.backend.memory.vlob import MemoryVlobComponent
from parsec.backend.memory.block import MemoryBlockComponent
@attr.s
class Realm:
status: RealmStatus = attr.ib(factory=lambda: RealmStatus(None, None, None, 1))
checkpoint: int = attr.ib(default=0)
granted_roles: List[RealmGrantedRole] = attr.ib(factory=list)
@property
def roles(self):
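        # Replay grants in chronological order; a role of None acts as a
        # revocation and drops the user from the mapping.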
roles = {}
for x in sorted(self.granted_roles, key=lambda x: x.granted_on):
if x.role is None:
roles.pop(x.user_id, None)
else:
roles[x.user_id] = x.role
return roles
class MemoryRealmComponent(BaseRealmComponent):
def __init__(self, send_event):
self._send_event = send_event
self._user_component = None
self._message_component = None
self._vlob_component = None
self._block_component = None
self._realms = {}
self._maintenance_reencryption_is_finished_hook = None
def register_components(
self,
user: BaseUserComponent,
message: BaseMessageComponent,
vlob: MemoryVlobComponent,
block: MemoryBlockComponent,
**other_components,
):
self._user_component = user
self._message_component = message
self._vlob_component = vlob
self._block_component = block
def _get_realm(self, organization_id, realm_id):
try:
return self._realms[(organization_id, realm_id)]
except KeyError:
raise RealmNotFoundError(f"Realm `{realm_id}` doesn't exist")
async def create(
self, organization_id: OrganizationID, self_granted_role: RealmGrantedRole
) -> None:
assert self_granted_role.granted_by is not None
assert self_granted_role.granted_by.user_id == self_granted_role.user_id
assert self_granted_role.role == RealmRole.OWNER
key = (organization_id, self_granted_role.realm_id)
if key not in self._realms:
self._realms[key] = Realm(granted_roles=[self_granted_role])
await self._send_event(
BackendEvent.REALM_ROLES_UPDATED,
organization_id=organization_id,
author=self_granted_role.granted_by,
realm_id=self_granted_role.realm_id,
user=self_granted_role.user_id,
role=self_granted_role.role,
)
else:
raise RealmAlreadyExistsError()
async def get_status(
self, organization_id: OrganizationID, author: DeviceID, realm_id: UUID
) -> RealmStatus:
realm = self._get_realm(organization_id, realm_id)
if author.user_id not in realm.roles:
raise RealmAccessError()
return realm.status
async def get_stats(
self, organization_id: OrganizationID, author: DeviceID, realm_id: UUID
) -> RealmStats:
realm = self._get_realm(organization_id, realm_id)
if author.user_id not in realm.roles:
raise RealmAccessError()
blocks_size = 0
vlobs_size = 0
for value in self._block_component._blockmetas.values():
if value.realm_id == realm_id:
blocks_size += value.size
for value in self._vlob_component._vlobs.values():
if value.realm_id == realm_id:
vlobs_size += sum(len(blob) for (blob, _, _) in value.data)
return RealmStats(blocks_size=blocks_size, vlobs_size=vlobs_size)
async def get_current_roles(
self, organization_id: OrganizationID, realm_id: UUID
) -> Dict[UserID, RealmRole]:
realm = self._get_realm(organization_id, realm_id)
roles: Dict[UserID, RealmRole] = {}
for x in realm.granted_roles:
if x.role is None:
roles.pop(x.user_id, None)
else:
roles[x.user_id] = x.role
return roles
async def get_role_certificates(
self,
organization_id: OrganizationID,
author: DeviceID,
realm_id: UUID,
since: pendulum.DateTime,
) -> List[bytes]:
realm = self._get_realm(organization_id, realm_id)
if author.user_id not in realm.roles:
raise RealmAccessError()
if since:
return [x.certificate for x in realm.granted_roles if x.granted_on > since]
else:
return [x.certificate for x in realm.granted_roles]
async def update_roles(
self,
organization_id: OrganizationID,
new_role: RealmGrantedRole,
recipient_message: Optional[bytes] = None,
) -> None:
assert new_role.granted_by is not None
assert new_role.granted_by.user_id != new_role.user_id
        # The only way for an OUTSIDER to be OWNER is to create their own realm
        # (given they need one to store their user manifest).
try:
user = self._user_component._get_user(organization_id, new_role.user_id)
except UserNotFoundError:
raise RealmNotFoundError(f"User `{new_role.user_id}` doesn't exist")
if user.profile == UserProfile.OUTSIDER and new_role.role in (
RealmRole.MANAGER,
RealmRole.OWNER,
):
raise RealmIncompatibleProfileError(
"User with OUTSIDER profile cannot be MANAGER or OWNER"
)
realm = self._get_realm(organization_id, new_role.realm_id)
if realm.status.in_maintenance:
raise RealmInMaintenanceError("Data realm is currently under maintenance")
owner_only = (RealmRole.OWNER,)
owner_or_manager = (RealmRole.OWNER, RealmRole.MANAGER)
existing_user_role = realm.roles.get(new_role.user_id)
needed_roles: Tuple[RealmRole, ...]
if existing_user_role in owner_or_manager or new_role.role in owner_or_manager:
needed_roles = owner_only
else:
needed_roles = owner_or_manager
author_role = realm.roles.get(new_role.granted_by.user_id)
if author_role not in needed_roles:
raise RealmAccessError()
if existing_user_role == new_role.role:
raise RealmRoleAlreadyGranted()
realm.granted_roles.append(new_role)
await self._send_event(
BackendEvent.REALM_ROLES_UPDATED,
organization_id=organization_id,
author=new_role.granted_by,
realm_id=new_role.realm_id,
user=new_role.user_id,
role=new_role.role,
)
if recipient_message is not None:
await self._message_component.send(
organization_id,
new_role.granted_by,
new_role.user_id,
new_role.granted_on,
recipient_message,
)
async def start_reencryption_maintenance(
self,
organization_id: OrganizationID,
author: DeviceID,
realm_id: UUID,
encryption_revision: int,
per_participant_message: Dict[UserID, bytes],
timestamp: pendulum.DateTime,
) -> None:
realm = self._get_realm(organization_id, realm_id)
if realm.roles.get(author.user_id) != RealmRole.OWNER:
raise RealmAccessError()
if realm.status.in_maintenance:
raise RealmInMaintenanceError(f"Realm `{realm_id}` alrealy in maintenance")
if encryption_revision != realm.status.encryption_revision + 1:
raise RealmEncryptionRevisionError("Invalid encryption revision")
now = pendulum.now()
not_revoked_roles = set()
for user_id in realm.roles.keys():
user = await self._user_component.get_user(organization_id, user_id)
if not user.revoked_on or user.revoked_on > now:
not_revoked_roles.add(user_id)
if per_participant_message.keys() ^ not_revoked_roles:
raise RealmParticipantsMismatchError(
"Realm participants and message recipients mismatch"
)
realm.status = RealmStatus(
maintenance_type=MaintenanceType.REENCRYPTION,
maintenance_started_on=timestamp,
maintenance_started_by=author,
encryption_revision=encryption_revision,
)
self._vlob_component._maintenance_reencryption_start_hook(
organization_id, realm_id, encryption_revision
)
# Should first send maintenance event, then message to each participant
await self._send_event(
BackendEvent.REALM_MAINTENANCE_STARTED,
organization_id=organization_id,
author=author,
realm_id=realm_id,
encryption_revision=encryption_revision,
)
for recipient, msg in per_participant_message.items():
await self._message_component.send(organization_id, author, recipient, timestamp, msg)
async def finish_reencryption_maintenance(
self,
organization_id: OrganizationID,
author: DeviceID,
realm_id: UUID,
encryption_revision: int,
) -> None:
realm = self._get_realm(organization_id, realm_id)
if realm.roles.get(author.user_id) != RealmRole.OWNER:
raise RealmAccessError()
if not realm.status.in_maintenance:
raise RealmNotInMaintenanceError(f"Realm `{realm_id}` not under maintenance")
if encryption_revision != realm.status.encryption_revision:
raise RealmEncryptionRevisionError("Invalid encryption revision")
if not self._vlob_component._maintenance_reencryption_is_finished_hook(
organization_id, realm_id, encryption_revision
):
raise RealmMaintenanceError("Reencryption operations are not over")
realm.status = RealmStatus(
maintenance_type=None,
maintenance_started_on=None,
maintenance_started_by=None,
encryption_revision=encryption_revision,
)
await self._send_event(
BackendEvent.REALM_MAINTENANCE_FINISHED,
organization_id=organization_id,
author=author,
realm_id=realm_id,
encryption_revision=encryption_revision,
)
async def get_realms_for_user(
self, organization_id: OrganizationID, user: UserID
) -> Dict[UUID, RealmRole]:
user_realms = {}
for (realm_org_id, realm_id), realm in self._realms.items():
if realm_org_id != organization_id:
continue
try:
user_realms[realm_id] = realm.roles[user]
except KeyError:
pass
return user_realms
|
Cloister Medieval Letter Tile - 8" square aluminum wall tile holds one stylized capital letter. Wall mount. Made in USA.
Combine cloister tiles to spell place names, other words, or your name.
Each tile is heavily embellished with floral accents.
Color combinations: BG black plaque with gold characters; AC antique copper; OG bronze gold.
8" x 8" cloister style tiles with individual letters.
|
import sys
import json
import requests
# creates outbound message from alert payload contents
# and attempts to send to the specified endpoint
def send_message(payload):
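    # Splunk invokes this script with --execute and passes the alert payload
    # on stdin as JSON; the "configuration" block holds the user-entered
    # alert action settings (including the Jive endpoint URL).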
config = payload.get('configuration')
# Get the Tile Endpoint URL
jive_url = config.get('jive_url')
# create outbound JSON message body
body = json.dumps({
"app" : payload.get("app"),
"owner" : payload.get("owner"),
"results_file" : payload.get("results_file"),
"results_link" : payload.get("results_link"),
"server_host" : payload.get("server_host"),
"server_uri" : payload.get("server_uri"),
"session_key" : payload.get("session_key"),
"sid" : payload.get("sid"),
"search_name" : payload.get("search_name"),
"result" : payload.get("result"),
})
# create outbound request object
try:
headers = {"Content-Type": "application/json"}
result = requests.post(url=jive_url, data=body, headers=headers)
        print >>sys.stderr, "INFO Jive HTTP Response [%s] - [%s]" % (result.status_code, result.text)
        return True
    except Exception, e:
        print >> sys.stderr, "ERROR Error sending message: %s" % e
        return False
if __name__ == "__main__":
if len(sys.argv) > 1 and sys.argv[1] == "--execute":
try:
# retrieving message payload from splunk
raw_payload = sys.stdin.read()
payload = json.loads(raw_payload)
send_message(payload)
except Exception, e:
print >> sys.stderr, "ERROR Unexpected error: %s" % e
sys.exit(3)
else:
print >> sys.stderr, "FATAL Unsupported execution mode (expected --execute flag)"
sys.exit(1)
|
MILACA is located in MILLE LACS county, Minnesota at latitude 45.756741 and longitude -93.651423 with an elevation of 1079 feet. The population of MILACA is 2,954 as of the year 2005 with a population growth of 14.50% from the year 2000 through 2005. The most common zip code of MILACA is 56353.
|
import argparse
import os
import platform
import shutil
import subprocess
import sys
import tarfile
from distutils.spawn import find_executable
wpt_root = os.path.abspath(os.path.join(os.path.dirname(__file__), os.pardir, os.pardir))
sys.path.insert(0, os.path.abspath(os.path.join(wpt_root, "tools")))
from . import browser, utils, virtualenv
logger = None
class WptrunError(Exception):
pass
class WptrunnerHelpAction(argparse.Action):
def __init__(self,
option_strings,
dest=argparse.SUPPRESS,
default=argparse.SUPPRESS,
help=None):
super(WptrunnerHelpAction, self).__init__(
option_strings=option_strings,
dest=dest,
default=default,
nargs=0,
help=help)
def __call__(self, parser, namespace, values, option_string=None):
from wptrunner import wptcommandline
wptparser = wptcommandline.create_parser()
wptparser.usage = parser.usage
wptparser.print_help()
parser.exit()
def create_parser():
from wptrunner import wptcommandline
parser = argparse.ArgumentParser(add_help=False)
parser.add_argument("product", action="store",
help="Browser to run tests in")
parser.add_argument("--yes", "-y", dest="prompt", action="store_false", default=True,
help="Don't prompt before installing components")
parser.add_argument("--stability", action="store_true",
help="Stability check tests")
parser.add_argument("--install-browser", action="store_true",
help="Install the latest development version of the browser")
parser._add_container_actions(wptcommandline.create_parser())
return parser
def exit(msg):
logger.error(msg)
sys.exit(1)
def args_general(kwargs):
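    # Default to the in-tree tests/metadata and the pregenerated certificates
    # shipped with wpt unless the caller supplied explicit paths.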
kwargs.set_if_none("tests_root", wpt_root)
kwargs.set_if_none("metadata_root", wpt_root)
kwargs.set_if_none("manifest_update", True)
if kwargs["ssl_type"] in (None, "pregenerated"):
cert_root = os.path.join(wpt_root, "tools", "certs")
if kwargs["ca_cert_path"] is None:
kwargs["ca_cert_path"] = os.path.join(cert_root, "cacert.pem")
if kwargs["host_key_path"] is None:
kwargs["host_key_path"] = os.path.join(cert_root, "web-platform.test.key")
if kwargs["host_cert_path"] is None:
kwargs["host_cert_path"] = os.path.join(cert_root, "web-platform.test.pem")
elif kwargs["ssl_type"] == "openssl":
if not find_executable(kwargs["openssl_binary"]):
if os.uname()[0] == "Windows":
raise WptrunError("""OpenSSL binary not found. If you need HTTPS tests, install OpenSSL from
https://slproweb.com/products/Win32OpenSSL.html
Ensuring that libraries are added to /bin and add the resulting bin directory to
your PATH.
Otherwise run with --ssl-type=none""")
else:
raise WptrunError("""OpenSSL not found. If you don't need HTTPS support run with --ssl-type=none,
otherwise install OpenSSL and ensure that it's on your $PATH.""")
def check_environ(product):
if product not in ("firefox", "servo"):
expected_hosts = ["web-platform.test",
"www.web-platform.test",
"www1.web-platform.test",
"www2.web-platform.test",
"xn--n8j6ds53lwwkrqhv28a.web-platform.test",
"xn--lve-6lad.web-platform.test",
"nonexistent-origin.web-platform.test"]
missing_hosts = set(expected_hosts)
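        # Scan the system hosts file and tick off every expected wpt host that
        # is already mapped; whatever remains is reported as missing below.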
if platform.uname()[0] != "Windows":
hosts_path = "/etc/hosts"
else:
hosts_path = "C:\Windows\System32\drivers\etc\hosts"
with open(hosts_path, "r") as f:
for line in f:
line = line.split("#", 1)[0].strip()
parts = line.split()
if len(parts) == 2:
host = parts[1]
missing_hosts.discard(host)
if missing_hosts:
raise WptrunError("""Missing hosts file configuration. Expected entries like:
%s
See README.md for more details.""" % "\n".join("%s\t%s" %
("127.0.0.1" if "nonexistent" not in host else "0.0.0.0", host)
for host in expected_hosts))
class BrowserSetup(object):
name = None
browser_cls = None
def __init__(self, venv, prompt=True, sub_product=None):
self.browser = self.browser_cls()
self.venv = venv
self.prompt = prompt
self.sub_product = sub_product
def prompt_install(self, component):
if not self.prompt:
return True
while True:
resp = raw_input("Download and install %s [Y/n]? " % component).strip().lower()
if not resp or resp == "y":
return True
elif resp == "n":
return False
def install(self, venv):
if self.prompt_install(self.name):
return self.browser.install(venv.path)
def setup(self, kwargs):
self.venv.install_requirements(os.path.join(wpt_root, "tools", "wptrunner", self.browser.requirements))
self.setup_kwargs(kwargs)
class Firefox(BrowserSetup):
name = "firefox"
browser_cls = browser.Firefox
def setup_kwargs(self, kwargs):
if kwargs["binary"] is None:
binary = self.browser.find_binary()
if binary is None:
raise WptrunError("""Firefox binary not found on $PATH.
Install Firefox or use --binary to set the binary path""")
kwargs["binary"] = binary
if kwargs["certutil_binary"] is None and kwargs["ssl_type"] != "none":
certutil = self.browser.find_certutil()
if certutil is None:
# Can't download this for now because it's missing the libnss3 library
raise WptrunError("""Can't find certutil.
This must be installed using your OS package manager or directly e.g.
Debian/Ubuntu:
sudo apt install libnss3-tools
macOS/Homebrew:
brew install nss
Others:
Download the firefox archive and common.tests.zip archive for your platform
from https://archive.mozilla.org/pub/firefox/nightly/latest-mozilla-central/
Then extract certutil[.exe] from the tests.zip package and
libnss3[.so|.dll|.dylib], and put the former on your PATH and the latter on
your library path.
""")
else:
print("Using certutil %s" % certutil)
if certutil is not None:
kwargs["certutil_binary"] = certutil
else:
print("Unable to find or install certutil, setting ssl-type to none")
kwargs["ssl_type"] = "none"
if kwargs["webdriver_binary"] is None and "wdspec" in kwargs["test_types"]:
webdriver_binary = self.browser.find_webdriver()
if webdriver_binary is None:
install = self.prompt_install("geckodriver")
if install:
print("Downloading geckodriver")
webdriver_binary = self.browser.install_webdriver(dest=self.venv.bin_path)
else:
print("Using webdriver binary %s" % webdriver_binary)
if webdriver_binary:
kwargs["webdriver_binary"] = webdriver_binary
else:
print("Unable to find or install geckodriver, skipping wdspec tests")
kwargs["test_types"].remove("wdspec")
if kwargs["prefs_root"] is None:
print("Downloading gecko prefs")
prefs_root = self.browser.install_prefs(self.venv.path)
kwargs["prefs_root"] = prefs_root
class Chrome(BrowserSetup):
name = "chrome"
browser_cls = browser.Chrome
def setup_kwargs(self, kwargs):
if kwargs["webdriver_binary"] is None:
webdriver_binary = self.browser.find_webdriver()
if webdriver_binary is None:
install = self.prompt_install("chromedriver")
if install:
print("Downloading chromedriver")
webdriver_binary = self.browser.install_webdriver(dest=self.venv.bin_path)
else:
print("Using webdriver binary %s" % webdriver_binary)
if webdriver_binary:
kwargs["webdriver_binary"] = webdriver_binary
else:
raise WptrunError("Unable to locate or install chromedriver binary")
class Edge(BrowserSetup):
name = "edge"
browser_cls = browser.Edge
def install(self, venv):
raise NotImplementedError
def setup_kwargs(self, kwargs):
if kwargs["webdriver_binary"] is None:
webdriver_binary = self.browser.find_webdriver()
if webdriver_binary is None:
raise WptrunError("""Unable to find WebDriver and we aren't yet clever enough to work out which
version to download. Please go to the following URL and install the correct
version for your Edge/Windows release somewhere on the %PATH%:
https://developer.microsoft.com/en-us/microsoft-edge/tools/webdriver/
""")
kwargs["webdriver_binary"] = webdriver_binary
class InternetExplorer(BrowserSetup):
name = "ie"
browser_cls = browser.InternetExplorer
def install(self, venv):
raise NotImplementedError
def setup_kwargs(self, kwargs):
if kwargs["webdriver_binary"] is None:
webdriver_binary = self.browser.find_webdriver()
if webdriver_binary is None:
raise WptrunError("""Unable to find WebDriver and we aren't yet clever enough to work out which
version to download. Please go to the following URL and install the driver for Internet Explorer
somewhere on the %PATH%:
https://selenium-release.storage.googleapis.com/index.html
""")
kwargs["webdriver_binary"] = webdriver_binary
class Sauce(BrowserSetup):
name = "sauce"
browser_cls = browser.Sauce
def install(self, venv):
raise NotImplementedError
def setup_kwargs(self, kwargs):
kwargs.set_if_none("sauce_browser", self.sub_product[0])
kwargs.set_if_none("sauce_version", self.sub_product[1])
kwargs["test_types"] = ["testharness", "reftest"]
class Servo(BrowserSetup):
name = "servo"
browser_cls = browser.Servo
def install(self, venv):
raise NotImplementedError
def setup_kwargs(self, kwargs):
if kwargs["binary"] is None:
binary = self.browser.find_binary()
if binary is None:
raise WptrunError("Unable to find servo binary on the PATH")
kwargs["binary"] = binary
product_setup = {
"firefox": Firefox,
"chrome": Chrome,
"edge": Edge,
"ie": InternetExplorer,
"servo": Servo,
"sauce": Sauce,
}
def setup_wptrunner(venv, prompt=True, install=False, **kwargs):
from wptrunner import wptrunner, wptcommandline
global logger
kwargs = utils.Kwargs(kwargs.iteritems())
product_parts = kwargs["product"].split(":")
kwargs["product"] = product_parts[0]
sub_product = product_parts[1:]
wptrunner.setup_logging(kwargs, {"mach": sys.stdout})
logger = wptrunner.logger
check_environ(kwargs["product"])
args_general(kwargs)
if kwargs["product"] not in product_setup:
raise WptrunError("Unsupported product %s" % kwargs["product"])
setup_cls = product_setup[kwargs["product"]](venv, prompt, sub_product)
if install:
logger.info("Installing browser")
kwargs["binary"] = setup_cls.install(venv)
setup_cls.setup(kwargs)
wptcommandline.check_args(kwargs)
wptrunner_path = os.path.join(wpt_root, "tools", "wptrunner")
venv.install_requirements(os.path.join(wptrunner_path, "requirements.txt"))
return kwargs
def run(venv, **kwargs):
#Remove arguments that aren't passed to wptrunner
prompt = kwargs.pop("prompt", True)
stability = kwargs.pop("stability", True)
install_browser = kwargs.pop("install_browser", False)
kwargs = setup_wptrunner(venv,
prompt=prompt,
install=install_browser,
**kwargs)
if stability:
import stability
iterations, results, inconsistent = stability.run(venv, logger, **kwargs)
def log(x):
print(x)
if inconsistent:
stability.write_inconsistent(log, inconsistent, iterations)
else:
log("All tests stable")
rv = len(inconsistent) > 0
else:
rv = run_single(venv, **kwargs) > 0
return rv
def run_single(venv, **kwargs):
from wptrunner import wptrunner
return wptrunner.start(**kwargs)
def main():
try:
parser = create_parser()
args = parser.parse_args()
venv = virtualenv.Virtualenv(os.path.join(wpt_root, "_venv_%s") % platform.uname()[0])
venv.start()
venv.install_requirements(os.path.join(wpt_root, "tools", "wptrunner", "requirements.txt"))
venv.install("requests")
        return run(venv, **vars(args))
except WptrunError as e:
exit(e.message)
if __name__ == "__main__":
import pdb
from tools import localpaths
try:
main()
except:
pdb.post_mortem()
|
Find the right tour for you through Spahats Falls. We’ve got 7 tours going to Spahats Falls, starting from just 4 days in length; the longest tour is 15 days. The most popular month to go is July, which has the highest number of tour departures.
"We were on the last tour of the season so some things had closed down but we got..."
"A great trip, great group, brilliant guide and experiences that took our breath away"
"It was very good. We really enjoyed despite the initial hiccups. The guide and driver..."
"That’s awesome tour i hope join next time"
|
from mock import (
call, patch, Mock
)
from kiwi.container.docker import ContainerImageDocker
class TestContainerImageDocker(object):
@patch('kiwi.container.docker.Compress')
@patch('kiwi.container.docker.Command.run')
@patch('kiwi.container.oci.RuntimeConfig')
@patch('kiwi.container.oci.OCI')
def test_pack_image_to_file(
self, mock_OCI, mock_RuntimeConfig, mock_command, mock_compress
):
oci = Mock()
oci.container_name = 'kiwi_oci_dir.XXXX/oci_layout:latest'
mock_OCI.return_value = oci
compressor = Mock()
compressor.xz = Mock(
return_value='result.tar.xz'
)
mock_compress.return_value = compressor
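        # pack_image_to_file skopeo-copies the OCI layout into a docker
        # archive, then compresses it (or not) per the runtime configuration.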
docker = ContainerImageDocker(
'root_dir', {
'container_name': 'foo/bar',
'additional_tags': ['current', 'foobar']
}
)
docker.runtime_config.get_container_compression = Mock(
return_value='xz'
)
assert docker.pack_image_to_file('result.tar') == 'result.tar.xz'
assert mock_command.call_args_list == [
call(['rm', '-r', '-f', 'result.tar']),
call([
'skopeo', 'copy', 'oci:kiwi_oci_dir.XXXX/oci_layout:latest',
'docker-archive:result.tar:foo/bar:latest',
'--additional-tag', 'foo/bar:current',
'--additional-tag', 'foo/bar:foobar'
])
]
mock_compress.assert_called_once_with('result.tar')
compressor.xz.assert_called_once_with(
docker.runtime_config.get_xz_options.return_value
)
docker.runtime_config.get_container_compression = Mock(
return_value=None
)
assert docker.pack_image_to_file('result.tar') == 'result.tar'
|
Town of Mukwonago Gorgeous split ranch on 1.5 acres Reduced! Drive up & enjoy the amazing curb appeal & quiet no-through subdivision. Beautiful 5-year-new split-bedroom Ranch located in the LOW TAX district of the Town of Mukwonago. Situated on 1.5 acres with mature Hickory trees and a covered front porch. Beautiful Kitchen with rare Knotty Alder cabinets, stove & fridge included. Distressed Oak flooring adds to the rustic/contemporary feeling of a custom home. Huge living room w/11 foot ceilings and natural fireplace. The Master Bath has a Kohler Steam Shower, whirlpool tub & a tiled floor. Solid six panel doors throughout the home. Amazing lower level with 8 1/2 foot ceilings, 10' Wet Bar, fitness room, play room & a 3rd full bath. 3.5 car attached garage, main level laundry. Check it out & bring us an offer!
|
"""
Unit tests for masquerade
Based on (and depends on) unit tests for courseware.
Notes for running by hand:
./manage.py lms --settings test test lms/djangoapps/courseware
"""
import json
from django.core.urlresolvers import reverse
from django.test.utils import override_settings
from opaque_keys.edx.locations import SlashSeparatedCourseKey
from courseware.tests.factories import StaffFactory
from courseware.tests.helpers import LoginEnrollmentTestCase
from lms.djangoapps.lms_xblock.runtime import quote_slashes
from xmodule.modulestore.django import modulestore, clear_existing_modulestores
from xmodule.modulestore.tests.django_utils import ModuleStoreTestCase
from xmodule.modulestore.tests.django_utils import TEST_DATA_MIXED_GRADED_MODULESTORE
# TODO: the "abtest" node in the sample course "graded" is currently preventing
# it from being successfully loaded in the mongo modulestore.
# Fix this testcase class to not depend on that course, and let it use
# the mocked modulestore instead of the XML.
@override_settings(MODULESTORE=TEST_DATA_MIXED_GRADED_MODULESTORE)
class TestStaffMasqueradeAsStudent(ModuleStoreTestCase, LoginEnrollmentTestCase):
"""
Check for staff being able to masquerade as student.
"""
def setUp(self):
# Clear out the modulestores, causing them to reload
clear_existing_modulestores()
self.graded_course = modulestore().get_course(SlashSeparatedCourseKey("edX", "graded", "2012_Fall"))
# Create staff account
self.staff = StaffFactory(course_key=self.graded_course.id)
self.logout()
# self.staff.password is the sha hash but login takes the plain text
self.login(self.staff.email, 'test')
self.enroll(self.graded_course)
def get_cw_section(self):
url = reverse('courseware_section',
kwargs={'course_id': self.graded_course.id.to_deprecated_string(),
'chapter': 'GradedChapter',
'section': 'Homework1'})
resp = self.client.get(url)
print "url ", url
return resp
def test_staff_debug_for_staff(self):
resp = self.get_cw_section()
sdebug = 'Staff Debug Info'
print resp.content
self.assertTrue(sdebug in resp.content)
def toggle_masquerade(self):
"""
Toggle masquerade state.
"""
masq_url = reverse('masquerade-switch', kwargs={'marg': 'toggle'})
print "masq_url ", masq_url
resp = self.client.get(masq_url)
return resp
def test_no_staff_debug_for_student(self):
togresp = self.toggle_masquerade()
print "masq now ", togresp.content
self.assertEqual(togresp.content, '{"status": "student"}', '')
resp = self.get_cw_section()
sdebug = 'Staff Debug Info'
self.assertFalse(sdebug in resp.content)
def get_problem(self):
pun = 'H1P1'
problem_location = self.graded_course.id.make_usage_key("problem", pun)
modx_url = reverse('xblock_handler',
kwargs={'course_id': self.graded_course.id.to_deprecated_string(),
'usage_id': quote_slashes(problem_location.to_deprecated_string()),
'handler': 'xmodule_handler',
'suffix': 'problem_get'})
resp = self.client.get(modx_url)
print "modx_url ", modx_url
return resp
def test_showanswer_for_staff(self):
resp = self.get_problem()
html = json.loads(resp.content)['html']
print html
sabut = '<button class="show"><span class="show-label">Show Answer</span> <span class="sr">Reveal Answer</span></button>'
self.assertTrue(sabut in html)
def test_no_showanswer_for_student(self):
togresp = self.toggle_masquerade()
print "masq now ", togresp.content
self.assertEqual(togresp.content, '{"status": "student"}', '')
resp = self.get_problem()
html = json.loads(resp.content)['html']
sabut = '<button class="show"><span class="show-label" aria-hidden="true">Show Answer</span> <span class="sr">Reveal answer above</span></button>'
self.assertFalse(sabut in html)
|
Always a busy month in the garden! You’ll be reaping the rewards from the vegetable garden as well as getting the garden ready for winter/next season.
Plant spring-flowering bulbs such as daffodils, crocuses and hyacinths now, ready for a spring display.
Divide herbaceous perennials such as Cyclamen, Salvia and Delphinium and water in well.
Dig up any remaining potatoes before slugs tuck in.
Continue to deadhead and feed hanging baskets/containers and you should enjoy colour until first frost.
September is the perfect time to plant out trees and shrubs for vigorous growth next spring.
Prune climbing roses and rambling roses after flowering (not if they are repeat flowering).
Clear out your Greenhouse and cold frames.
|
# coding=utf-8
# Copyright 2019 The RecSim Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Setup script for RecSim.
This script will install RecSim as a Python module.
See: https://github.com/google-research/recsim
"""
from os import path
from setuptools import find_packages
from setuptools import setup
here = path.abspath(path.dirname(__file__))
install_requires = [
'absl-py',
'dopamine-rl >= 2.0.5',
'gin-config',
'gym',
'numpy',
'scipy',
'tensorflow',
]
recsim_description = (
'RecSim: A Configurable Recommender Systems Simulation Platform')
with open('README.md', 'r') as fh:
long_description = fh.read()
setup(
name='recsim',
version='0.2.4',
author='The RecSim Team',
author_email='no-reply@google.com',
description=recsim_description,
long_description=long_description,
long_description_content_type='text/markdown',
url='https://github.com/google-research/recsim',
packages=find_packages(exclude=['docs']),
classifiers=[ # Optional
'Development Status :: 3 - Alpha',
# Indicate who your project is intended for
'Intended Audience :: Developers',
'Intended Audience :: Education',
'Intended Audience :: Science/Research',
# Pick your license as you wish
'License :: OSI Approved :: Apache Software License',
# Specify the Python versions you support here. In particular, ensure
# that you indicate whether you support Python 2, Python 3 or both.
'Programming Language :: Python :: 2',
'Programming Language :: Python :: 3',
'Topic :: Scientific/Engineering',
'Topic :: Scientific/Engineering :: Mathematics',
'Topic :: Scientific/Engineering :: Artificial Intelligence',
'Topic :: Software Development',
'Topic :: Software Development :: Libraries',
'Topic :: Software Development :: Libraries :: Python Modules',
],
install_requires=install_requires,
project_urls={ # Optional
'Documentation': 'https://github.com/google-research/recsim',
'Bug Reports': 'https://github.com/google-research/recsim/issues',
'Source': 'https://github.com/google-research/recsim',
},
license='Apache 2.0',
keywords='recsim reinforcement-learning recommender-system simulation'
)
|
In the current column on ChessPublishing.com the author draws your attention to two theoretical battles in the recent World Championship Match-Ed.
A world championship match is an event eagerly awaited by all players and for opening aficionados it’s a special occasion as novelties are revealed at the highest level of chess. From this point of view the recently concluded Anand-Carlsen Match first seemed to disappoint. To all appearances the Match was decided in queenless middlegames or endgames. But then appearances can be deceptive. The opening phase of the games in this Match was delicately nuanced and what one saw was the proverbial tip of the iceberg. The subtleties remained beneath the surface.
Here Anand played 15.Ne4, exchanging the knights. Rendle prefers a different path with 15.Ne2!?, threatening g4 with attacking chances. So why didn’t Vishy play this line? He had already been taken by surprise by Carlsen’s Caro-Kann and did not want to walk into an ambush so early in the Match. 15.Ne4 clarifies the position, giving away nothing.
For his part, Carlsen did not repeat the Caro-Kann in the Match: Anand could have come up with an improvement on this line.
The second position arose after Carlsen played 17….Qd5.
Here Anand played 18.Qxd5, again to the disappointment of viewers all over the world.
It’s true that Capablanca played…c4 in a similar position and was outplayed by Botvinnik in a classic game (download pgn).
But he never claimed that this move was right and (wisely!) no grandmaster has tried it thereafter.
As for the present game, notwithstanding Carlsen’s intentions to disrupt White’s development, Anand did achieve a harmonious position. Indeed, it could have become overwhelming on more than one occasion but for the extraordinary tension of the moment. As for Carlsen, I don’t think he would repeat it in future.
So what should Black play here? The standard move 8…0-0 suggests itself. The other move 8…Qc7 recommended by Emms leads to some terrific complications. But as his own analysis shows, White is rather better after all those forced moves. This does not mean that 8… Qc7 is a bad move. Perhaps Black should complete development before entering such complications.
In any case we owe a debt to John Emms, for he is one of the few commentators to have seen the point of Anand’s 28.Nf1 in this game and explained it to readers.
I am afraid the developments in the recent Match have taken up too much space.
There is much else to learn from the November updates on this site.
Among others, do not miss the game Sasikiran-Shirov (European Cup 2013) from Ruslan Scherbakov’s update on d4 d5 openings.
The same goes for David Vigorito’s update on the King’s Indian.
Here is a fascinating position.
White has just played 14.Nb5!?
|
# coding=utf-8
# Copyright 2016 Pants project contributors (see CONTRIBUTORS.md).
# Licensed under the Apache License, Version 2.0 (see LICENSE).
from __future__ import (absolute_import, division, generators, nested_scopes, print_function,
unicode_literals, with_statement)
import os
import shutil
import subprocess
from contextlib import contextmanager
from textwrap import dedent
from pants.base.build_environment import get_buildroot
from pants.util.contextutil import environment_as, temporary_dir
from pants.util.dirutil import safe_mkdir, safe_open, touch
from pants_test.base_test import TestGenerator
from pants_test.pants_run_integration_test import PantsRunIntegrationTest, ensure_engine
from pants_test.testutils.git_util import initialize_repo
def lines_to_set(str_or_list):
if isinstance(str_or_list, list):
return set(str_or_list)
else:
return set(x for x in str(str_or_list).split('\n') if x)
@contextmanager
def mutated_working_copy(files_to_mutate, to_append='\n '):
"""Given a list of files, append whitespace to each of them to trigger a git diff - then reset."""
assert to_append, 'to_append may not be empty'
for f in files_to_mutate:
with open(f, 'ab') as fh:
fh.write(to_append)
try:
yield
finally:
seek_point = len(to_append) * -1
for f in files_to_mutate:
with open(f, 'ab') as fh:
fh.seek(seek_point, os.SEEK_END)
fh.truncate()
@contextmanager
def create_isolated_git_repo():
# Isolated Git Repo Structure:
# worktree
# |--README
# |--pants.ini
# |--3rdparty
# |--BUILD
# |--src
# |--resources
# |--org/pantsbuild/resourceonly
# |--BUILD
# |--README.md
# |--java
# |--org/pantsbuild/helloworld
# |--BUILD
# |--helloworld.java
# |--python
# |--python_targets
# |--BUILD
# |--test_binary.py
# |--test_library.py
# |--test_unclaimed_src.py
# |--sources
# |--BUILD
# |--sources.py
# |--sources.txt
# |--tests
# |--scala
# |--org/pantsbuild/cp-directories
# |--BUILD
# |--ClasspathDirectoriesSpec.scala
with temporary_dir(root_dir=get_buildroot()) as worktree:
with safe_open(os.path.join(worktree, 'README'), 'w') as fp:
fp.write('Just a test tree.')
# Create an empty pants config file.
touch(os.path.join(worktree, 'pants.ini'))
# Copy .gitignore to new repo.
shutil.copyfile('.gitignore', os.path.join(worktree, '.gitignore'))
with initialize_repo(worktree=worktree, gitdir=os.path.join(worktree, '.git')) as git:
# Resource File
resource_file = os.path.join(worktree, 'src/resources/org/pantsbuild/resourceonly/README.md')
with safe_open(resource_file, 'w') as fp:
fp.write('Just resource.')
resource_build_file = os.path.join(worktree, 'src/resources/org/pantsbuild/resourceonly/BUILD')
with safe_open(resource_build_file, 'w') as fp:
fp.write(dedent("""
resources(
name='resource',
sources=['README.md'],
)
"""))
git.add(resource_file, resource_build_file)
git.commit('Check in a resource target.')
# Java Program
src_file = os.path.join(worktree, 'src/java/org/pantsbuild/helloworld/helloworld.java')
with safe_open(src_file, 'w') as fp:
fp.write(dedent("""
package org.pantsbuild.helloworld;
class HelloWorld {
public static void main(String[] args) {
System.out.println("Hello, World!\n");
}
}
"""))
src_build_file = os.path.join(worktree, 'src/java/org/pantsbuild/helloworld/BUILD')
with safe_open(src_build_file, 'w') as fp:
fp.write(dedent("""
jvm_binary(
dependencies=[
'{}',
],
source='helloworld.java',
main='org.pantsbuild.helloworld.HelloWorld',
)
""".format('src/resources/org/pantsbuild/resourceonly:resource')))
git.add(src_file, src_build_file)
git.commit('hello world java program with a dependency on a resource file.')
# Scala Program
scala_src_dir = os.path.join(worktree, 'tests/scala/org/pantsbuild/cp-directories')
safe_mkdir(os.path.dirname(scala_src_dir))
shutil.copytree('testprojects/tests/scala/org/pantsbuild/testproject/cp-directories', scala_src_dir)
git.add(scala_src_dir)
git.commit('Check in a scala test target.')
# Python library and binary
python_src_dir = os.path.join(worktree, 'src/python/python_targets')
safe_mkdir(os.path.dirname(python_src_dir))
shutil.copytree('testprojects/src/python/python_targets', python_src_dir)
git.add(python_src_dir)
git.commit('Check in python targets.')
# A `python_library` with `resources=['file.name']`.
python_src_dir = os.path.join(worktree, 'src/python/sources')
safe_mkdir(os.path.dirname(python_src_dir))
shutil.copytree('testprojects/src/python/sources', python_src_dir)
git.add(python_src_dir)
git.commit('Check in a python library with resource dependency.')
# Copy 3rdparty/BUILD.
_3rdparty_build = os.path.join(worktree, '3rdparty/BUILD')
safe_mkdir(os.path.dirname(_3rdparty_build))
shutil.copyfile('3rdparty/BUILD', _3rdparty_build)
git.add(_3rdparty_build)
git.commit('Check in 3rdparty/BUILD.')
with environment_as(PANTS_BUILDROOT_OVERRIDE=worktree):
yield worktree
class ChangedIntegrationTest(PantsRunIntegrationTest, TestGenerator):
TEST_MAPPING = {
# A `jvm_binary` with `source='file.name'`.
'src/java/org/pantsbuild/helloworld/helloworld.java': dict(
none=['src/java/org/pantsbuild/helloworld:helloworld'],
direct=['src/java/org/pantsbuild/helloworld:helloworld'],
transitive=['src/java/org/pantsbuild/helloworld:helloworld']
),
# A `python_binary` with `source='file.name'`.
'src/python/python_targets/test_binary.py': dict(
none=['src/python/python_targets:test'],
direct=['src/python/python_targets:test'],
transitive=['src/python/python_targets:test']
),
# A `python_library` with `sources=['file.name']`.
'src/python/python_targets/test_library.py': dict(
none=['src/python/python_targets:test_library'],
direct=['src/python/python_targets:test',
'src/python/python_targets:test_library',
'src/python/python_targets:test_library_direct_dependee'],
transitive=['src/python/python_targets:test',
'src/python/python_targets:test_library',
'src/python/python_targets:test_library_direct_dependee',
'src/python/python_targets:test_library_transitive_dependee',
'src/python/python_targets:test_library_transitive_dependee_2',
'src/python/python_targets:test_library_transitive_dependee_3',
'src/python/python_targets:test_library_transitive_dependee_4']
),
# A `resources` target with `sources=['file.name']` referenced by a `java_library` target.
'src/resources/org/pantsbuild/resourceonly/README.md': dict(
none=['src/resources/org/pantsbuild/resourceonly:resource'],
direct=['src/java/org/pantsbuild/helloworld:helloworld',
'src/resources/org/pantsbuild/resourceonly:resource'],
transitive=['src/java/org/pantsbuild/helloworld:helloworld',
'src/resources/org/pantsbuild/resourceonly:resource'],
),
# A `python_library` with `resources=['file.name']`.
'src/python/sources/sources.txt': dict(
none=['src/python/sources:sources'],
direct=['src/python/sources:sources'],
transitive=['src/python/sources:sources']
),
# A `scala_library` with `sources=['file.name']`.
'tests/scala/org/pantsbuild/cp-directories/ClasspathDirectoriesSpec.scala': dict(
none=['tests/scala/org/pantsbuild/cp-directories:cp-directories'],
direct=['tests/scala/org/pantsbuild/cp-directories:cp-directories'],
transitive=['tests/scala/org/pantsbuild/cp-directories:cp-directories']
),
# An unclaimed source file.
'src/python/python_targets/test_unclaimed_src.py': dict(
none=[],
direct=[],
transitive=[]
)
}
@classmethod
def generate_tests(cls):
"""Generates tests on the class for better reporting granularity than an opaque for loop test."""
def safe_filename(f):
return f.replace('/', '_').replace('.', '_')
for filename, dependee_mapping in cls.TEST_MAPPING.items():
for dependee_type in dependee_mapping.keys():
# N.B. The parameters here are used purely to close over the respective loop variables.
def inner_integration_coverage_test(self, filename=filename, dependee_type=dependee_type):
with create_isolated_git_repo() as worktree:
# Mutate the working copy so we can do `--changed-parent=HEAD` deterministically.
with mutated_working_copy([os.path.join(worktree, filename)]):
stdout = self.assert_changed_new_equals_old(
['--changed-include-dependees={}'.format(dependee_type), '--changed-parent=HEAD'],
test_list=True
)
self.assertEqual(
lines_to_set(self.TEST_MAPPING[filename][dependee_type]),
lines_to_set(stdout)
)
cls.add_test(
'test_changed_coverage_{}_{}'.format(dependee_type, safe_filename(filename)),
inner_integration_coverage_test
)
def assert_changed_new_equals_old(self, extra_args, success=True, test_list=False):
args = ['-q', 'changed'] + extra_args
changed_run = self.do_command(*args, success=success, enable_v2_engine=False)
engine_changed_run = self.do_command(*args, success=success, enable_v2_engine=True)
self.assertEqual(
lines_to_set(changed_run.stdout_data), lines_to_set(engine_changed_run.stdout_data)
)
if test_list:
# In the v2 engine, `--changed-*` options can alter the specs of any goal - test with `list`.
list_args = ['-q', 'list'] + extra_args
engine_list_run = self.do_command(*list_args, success=success, enable_v2_engine=True)
self.assertEqual(
lines_to_set(changed_run.stdout_data), lines_to_set(engine_list_run.stdout_data)
)
# If we get to here without asserting, we know all copies of stdout are identical - return one.
return changed_run.stdout_data
@ensure_engine
def test_changed_options_scope_shadowing(self):
"""Tests that the `test-changed` scope overrides `changed` scope."""
changed_src = 'src/python/python_targets/test_library.py'
expected_target = self.TEST_MAPPING[changed_src]['none'][0]
expected_set = {expected_target}
not_expected_set = set(self.TEST_MAPPING[changed_src]['transitive']).difference(expected_set)
with create_isolated_git_repo() as worktree:
with mutated_working_copy([os.path.join(worktree, changed_src)]):
pants_run = self.run_pants([
'-ldebug', # This ensures the changed target name shows up in the pants output.
'test-changed',
'--test-changed-changes-since=HEAD',
'--test-changed-include-dependees=none', # This option should be used.
'--changed-include-dependees=transitive' # This option should be stomped on.
])
self.assert_success(pants_run)
for expected_item in expected_set:
self.assertIn(expected_item, pants_run.stdout_data)
for not_expected_item in not_expected_set:
if expected_target.startswith(not_expected_item):
continue # Ignore subset matches.
self.assertNotIn(not_expected_item, pants_run.stdout_data)
@ensure_engine
def test_changed_options_scope_positional(self):
changed_src = 'src/python/python_targets/test_library.py'
expected_set = set(self.TEST_MAPPING[changed_src]['transitive'])
with create_isolated_git_repo() as worktree:
with mutated_working_copy([os.path.join(worktree, changed_src)]):
pants_run = self.run_pants([
'-ldebug', # This ensures the changed target names show up in the pants output.
'test-changed',
'--changes-since=HEAD',
'--include-dependees=transitive'
])
self.assert_success(pants_run)
for expected_item in expected_set:
self.assertIn(expected_item, pants_run.stdout_data)
@ensure_engine
def test_test_changed_exclude_target(self):
changed_src = 'src/python/python_targets/test_library.py'
exclude_target_regexp = r'_[0-9]'
excluded_set = {'src/python/python_targets:test_library_transitive_dependee_2',
'src/python/python_targets:test_library_transitive_dependee_3',
'src/python/python_targets:test_library_transitive_dependee_4'}
expected_set = set(self.TEST_MAPPING[changed_src]['transitive']) - excluded_set
with create_isolated_git_repo() as worktree:
with mutated_working_copy([os.path.join(worktree, changed_src)]):
pants_run = self.run_pants([
'-ldebug', # This ensures the changed target names show up in the pants output.
'--exclude-target-regexp={}'.format(exclude_target_regexp),
'test-changed',
'--changes-since=HEAD',
'--include-dependees=transitive'
])
self.assert_success(pants_run)
for expected_item in expected_set:
self.assertIn(expected_item, pants_run.stdout_data)
for excluded_item in excluded_set:
self.assertNotIn(excluded_item, pants_run.stdout_data)
@ensure_engine
def test_changed_changed_since_and_files(self):
with create_isolated_git_repo():
stdout = self.assert_changed_new_equals_old(['--changed-since=HEAD^^', '--files'])
# The output should be the files added in the last 2 commits.
self.assertEqual(
lines_to_set(stdout),
{'src/python/sources/BUILD',
'src/python/sources/sources.py',
'src/python/sources/sources.txt',
'3rdparty/BUILD'}
)
@ensure_engine
def test_changed_diffspec_and_files(self):
with create_isolated_git_repo():
git_sha = subprocess.check_output(['git', 'rev-parse', 'HEAD^^']).strip()
stdout = self.assert_changed_new_equals_old(['--changed-diffspec={}'.format(git_sha), '--files'])
# The output should be the files added in the last 2 commits.
self.assertEqual(
lines_to_set(stdout),
{'src/python/python_targets/BUILD',
'src/python/python_targets/test_binary.py',
'src/python/python_targets/test_library.py',
'src/python/python_targets/test_unclaimed_src.py'}
)
  # The following 4 tests do not run in an isolated repo because they don't mutate the working copy.
def test_changed(self):
self.assert_changed_new_equals_old([])
def test_changed_with_changes_since(self):
self.assert_changed_new_equals_old(['--changes-since=HEAD^^'])
def test_changed_with_changes_since_direct(self):
self.assert_changed_new_equals_old(['--changes-since=HEAD^^', '--include-dependees=direct'])
def test_changed_with_changes_since_transitive(self):
self.assert_changed_new_equals_old(['--changes-since=HEAD^^', '--include-dependees=transitive'])
ChangedIntegrationTest.generate_tests()
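# A minimal standalone sketch (not part of the suite) of the closure trick
# that generate_tests() relies on above: binding the loop variables as
# default arguments freezes their values at function-definition time, so
# each generated test keeps its own filename/dependee_type pair.
if __name__ == '__main__':
  demo_funcs = []
  for n in (1, 2, 3):
    def show(n=n):  # the default argument captures the current value of n
      return n
    demo_funcs.append(show)
  assert [f() for f in demo_funcs] == [1, 2, 3]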
|
import sys
import numpy as np
import pandas as pd
from natsort import natsorted
from pyranges.statistics import StatisticsMethods
from pyranges.genomicfeatures import GenomicFeaturesMethods
from pyranges import PyRanges
from pyranges.helpers import single_value_key, get_key_from_df
def set_dtypes(df, int64):
# if extended is None:
# extended = False if df.Start.dtype == np.int32 else True
if not int64:
dtypes = {
"Start": np.int32,
"End": np.int32,
"Chromosome": "category",
"Strand": "category",
}
else:
dtypes = {
"Start": np.int64,
"End": np.int64,
"Chromosome": "category",
"Strand": "category",
}
if "Strand" not in df:
del dtypes["Strand"]
# need to ascertain that object columns do not consist of multiple types
# https://github.com/biocore-ntnu/epic2/issues/32
for column in "Chromosome Strand".split():
if column not in df:
continue
df[column] = df[column].astype(str)
for col, dtype in dtypes.items():
if df[col].dtype.name != dtype:
df[col] = df[col].astype(dtype)
return df
def create_df_dict(df, stranded):
chrs = df.Chromosome.cat.remove_unused_categories()
df["Chromosome"] = chrs
if stranded:
grpby_key = "Chromosome Strand".split()
df["Strand"] = df.Strand.cat.remove_unused_categories()
else:
grpby_key = "Chromosome"
return {k: v for k, v in df.groupby(grpby_key)}
def create_pyranges_df(chromosomes, starts, ends, strands=None):
if isinstance(chromosomes, str) or isinstance(chromosomes, int):
chromosomes = pd.Series([chromosomes] * len(starts), dtype="category")
if strands is not None:
if isinstance(strands, str):
strands = pd.Series([strands] * len(starts), dtype="category")
columns = [chromosomes, starts, ends, strands]
lengths = list(str(len(s)) for s in columns)
assert (
len(set(lengths)) == 1
), "chromosomes, starts, ends and strands must be of equal length. But are {}".format(
", ".join(lengths)
)
colnames = "Chromosome Start End Strand".split()
else:
columns = [chromosomes, starts, ends]
lengths = list(str(len(s)) for s in columns)
assert (
len(set(lengths)) == 1
), "chromosomes, starts and ends must be of equal length. But are {}".format(
", ".join(lengths)
)
colnames = "Chromosome Start End".split()
idx = range(len(starts))
series_to_concat = []
for s in columns:
if isinstance(s, pd.Series):
s = pd.Series(s.values, index=idx)
else:
s = pd.Series(s, index=idx)
series_to_concat.append(s)
df = pd.concat(series_to_concat, axis=1)
df.columns = colnames
return df
def check_strandedness(df):
"""Check whether strand contains '.'"""
if "Strand" not in df:
return False
contains_more_than_plus_minus_in_strand_col = False
if str(df.Strand.dtype) == "category" and (
set(df.Strand.cat.categories) - set("+-")
):
contains_more_than_plus_minus_in_strand_col = True
elif not ((df.Strand == "+") | (df.Strand == "-")).all():
contains_more_than_plus_minus_in_strand_col = True
# if contains_more_than_plus_minus_in_strand_col:
# logging.warning("Strand contained more symbols than '+' or '-'. Not supported (yet) in PyRanges.")
return not contains_more_than_plus_minus_in_strand_col
def _init(
self,
df=None,
chromosomes=None,
starts=None,
ends=None,
strands=None,
int64=False,
copy_df=True,
):
# TODO: add categorize argument with dict of args to categorize?
if isinstance(df, PyRanges):
raise Exception("Object is already a PyRange.")
if isinstance(df, pd.DataFrame):
assert all(
c in df for c in "Chromosome Start End".split()
), "The dataframe does not have all the columns Chromosome, Start and End."
if copy_df:
df = df.copy()
if df is False or df is None:
df = create_pyranges_df(chromosomes, starts, ends, strands)
if isinstance(df, pd.DataFrame):
df = df.reset_index(drop=True)
stranded = check_strandedness(df)
df = set_dtypes(df, int64)
self.__dict__["dfs"] = create_df_dict(df, stranded)
# df is actually dict of dfs
else:
empty_removed = {k: v.copy() for k, v in df.items() if not v.empty}
_single_value_key = True
_key_same = True
_strand_valid = True
_has_strand = True
for key, df in empty_removed.items():
_key = get_key_from_df(df)
_single_value_key = single_value_key(df) and _single_value_key
_key_same = (_key == key) and _key_same
if isinstance(_key, tuple):
_strand_valid = _strand_valid and (_key[1] in ["+", "-"])
else:
_has_strand = False
if not all([_single_value_key, _key_same, _strand_valid]):
df = pd.concat(empty_removed.values()).reset_index(drop=True)
if _has_strand and _strand_valid:
empty_removed = df.groupby(["Chromosome", "Strand"])
else:
empty_removed = df.groupby("Chromosome")
empty_removed = {k: v for (k, v) in empty_removed}
self.__dict__["dfs"] = empty_removed
self.__dict__["features"] = GenomicFeaturesMethods(self)
self.__dict__["stats"] = StatisticsMethods(self)
|
Soy-Free Vegenaise® offers all of the great taste of Original Vegenaise®, now in a soy-free version! It’s smooth, creamy, delicious, and of course, entirely vegan and gluten free! It’s Better Than Mayo™!
|
'''
Harvester for the Digital Commons @ IWU for the SHARE project
Example API call: http://digitalcommons.iwu.edu/do/oai/?verb=ListRecords&metadataPrefix=oai_dc
'''
from __future__ import unicode_literals
from scrapi.base import OAIHarvester
class Iwu_commonsHarvester(OAIHarvester):
short_name = 'iwu_commons'
long_name = 'Digital Commons @ Illinois Wesleyan University'
url = 'http://digitalcommons.iwu.edu'
base_url = 'http://digitalcommons.iwu.edu/do/oai/'
property_list = ['date', 'type', 'source', 'format', 'identifier', 'setSpec']
approved_sets = [
u'oral_hist',
u'ames_award',
u'arthonors_book_gallery',
u'arthonors',
u'bio',
u'music_compositions',
u'cs',
u'constructing',
u'economics',
u'education',
u'ed_studies_posters',
u'eng',
u'envstu',
u'fac_biennial_exhibit_all',
u'fac_biennial_exhibit2011',
u'fac_biennial_exhibit2013',
u'fac_biennial_exhibit',
u'firstyear_summer',
u'founders_day_docs',
u'german',
u'theatre_hist',
u'history',
u'teaching_excellence',
u'honors_docs',
u'honors_programs_docs',
u'physics_honproj',
u'bio_honproj',
u'intstu_honproj',
u'envstu_honproj',
u'russian_honproj',
u'history_honproj',
u'theatre_honproj',
u'religion_honproj',
u'wostu_honproj',
u'nursing_honproj',
u'education_honproj',
u'eng_honproj',
u'french_honproj',
u'math_honproj',
u'socanth_honproj',
u'econ_honproj',
u'art_honproj',
u'cs_honproj',
u'amstudies_honproj',
u'grs_honproj',
u'hispstu_honproj',
u'polisci_honproj',
u'chem_honproj',
u'phil_honproj',
u'acct_fin_honproj',
u'busadmin_honproj',
u'german_honproj',
u'psych_honproj',
u'bookshelf',
u'wglt_interviews',
u'oralhist_2009',
u'oralhist_ucd',
u'oralhist_wesn',
u'italian',
u'japanese',
u'jwprc',
u'math',
u'music',
u'nursing',
u'oralhistory',
u'oralhistory_gallery',
u'anth_ethno',
u'gateway',
u'envstu_seminar',
u'music_outstanding_works',
u'writing_student',
u'polsci',
u'psych',
u'religion',
u'respublica',
u'russian',
u'grs_scholarship',
u'math_scholarship',
u'nursing_scholarship',
u'bio_scholarship',
u'religion_scholarship',
u'mcll_scholarship',
u'envstu_scholarship',
u'physics_scholarship',
u'socanth_scholarship',
u'history_scholarship',
u'intstu_scholarship',
u'cs_scholarship',
u'chem_scholarship',
u'eng_scholarship',
u'hispstu_scholarship',
u'psych_scholarship',
u'socanth',
u'student_prof',
u'sea',
u'parkplace',
u'uer',
u'germanresearch',
u'uauje',
u'univcom',
u'wglt'
]
timezone_granularity = True
|
Tony is a Fellow of the Institute of Chartered Accountants who has served as FD/CFO for businesses in a broad range of sectors including manufacturing, staffing and software both in the UK and in Europe, North America and South East Asia.
Tony is the Founder and Chief Executive of Transpire – The Director Network, a service launched in 2013 to meet the needs of senior executives and high achievers in the public, private or third sector, offering a carefully tailored and coherent package of advice, training and networking to smooth the transition from a high-level executive career to a non-executive portfolio and to provide ongoing support.
Tony has served on private and not-for profit sector Boards since 2009, is a Tutor on the Financial Times NED Diploma Programme and during 2018 chaired over 30 NED & Director CPD & Networking events!
Philip is an award-winning CEO/NED who enjoys partnering with innovative founders and investors in ambitious businesses, bringing hands-on business experience and dynamic leadership skills to complement and develop the team. He is experienced in taking over the CEO role from the founder, building and developing the corporate core knowledge, and enabling secure, scalable growth that fulfils the founder’s and stakeholders’ vision.
|
#!/usr/bin/env python
import os
import re
from sys import argv
scripts, css = argv
# filter icon name, ignore alias
def fil_icname(line):
if re.search('^\.fa-.*:before {$', line):
ic_name = re.split("[.:]", line)[1][3:]
return ic_name
def fil_iccode(line):
if re.search('^ content: .*;$', line):
ic_code = re.split("[\"]", line)[1][1:].upper()
return ic_code
# turn icon name to Camel Case
# forked from https://github.com/schischi-a/fontawesome-latex
def camel_case(name):
ret = name.replace('-', ' ')
ret = ret.title()
ret = ret.replace(' ', '')
return ret
def get_icons(fs_css):
icons = []
with open(fs_css, 'r') as fs_fp:
for line in fs_fp:
icon_name = fil_icname(line)
if icon_name is not None:
line = next(fs_fp)
icon_code = fil_iccode(line)
if icon_code is not None:
tex_name = camel_case(icon_name)
icons.append((icon_name, icon_code, tex_name))
return icons
def output_sty(sty, icons):
with open(sty, 'a') as f:
for ic in icons:
prefix = "\expandafter\def\csname faicon@"
ic_name_h = prefix + ic[0] + "\endcsname"
ic_code_tex = "{\symbol{\"" + ic[1] + "}} \\def\\fa" + ic[2]
ic_name_tail = " {{\FA\csname faicon@" + ic[0] + "\endcsname}}\n"
f.write(ic_name_h.ljust(63) + ic_code_tex.ljust(42) + ic_name_tail)
if __name__ == "__main__":
print("output fontawesome.sty...")
icons = get_icons(css)
temp_dir = os.path.dirname(css)
sty = os.path.join(temp_dir, "fontawesome.sty")
output_sty(sty, icons)
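# Usage sketch (hypothetical file names): pass the Font Awesome CSS file as
# the single argument and fontawesome.sty is written next to it, e.g.
#   python fa_css_to_sty.py /path/to/font-awesome.css
# Each icon then yields a line of roughly this shape:
#   \expandafter\def\csname faicon@adjust\endcsname {\symbol{"F042}} \def\faAdjust {{\FA\csname faicon@adjust\endcsname}}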
|
A Quebec nationalist examination of the historiography of post-Conquest Canada.
Brunet, Michel. “British Conquest: Canadian Social Scientists and the Fate of the Canadians.” The Canadian Historical Review Vol. 40, no. 2 (June 1959): 94–107.
|
from __future__ import print_function, division
import matplotlib
import logging
from sys import stdout
matplotlib.use('Agg') # Must be before importing matplotlib.pyplot or pylab!
from neuralnilm import (Net, RealApplianceSource)
from neuralnilm.source import (standardise, discretize, fdiff, power_and_fdiff,
RandomSegments, RandomSegmentsInMemory,
SameLocation)
from neuralnilm.experiment import run_experiment, init_experiment
from neuralnilm.net import TrainingError
from neuralnilm.layers import (MixtureDensityLayer, DeConv1DLayer,
SharedWeightsDenseLayer)
from neuralnilm.objectives import (scaled_cost, mdn_nll,
scaled_cost_ignore_inactive, ignore_inactive,
scaled_cost3)
from neuralnilm.plot import MDNPlotter, CentralOutputPlotter, Plotter, RectangularOutputPlotter, StartEndMeanPlotter
from neuralnilm.updates import clipped_nesterov_momentum
from neuralnilm.disaggregate import disaggregate
from neuralnilm.rectangulariser import rectangularise
from lasagne.nonlinearities import sigmoid, rectify, tanh, identity, softmax
from lasagne.objectives import squared_error, binary_crossentropy
from lasagne.init import Uniform, Normal
from lasagne.layers import (DenseLayer, Conv1DLayer,
ReshapeLayer, FeaturePoolLayer,
DimshuffleLayer, DropoutLayer, ConcatLayer, PadLayer)
from lasagne.updates import nesterov_momentum, momentum
from functools import partial
import os
import __main__
from copy import deepcopy
from math import sqrt
import numpy as np
import theano.tensor as T
import gc
"""
447: first attempt at disaggregation
"""
NAME = os.path.splitext(os.path.split(__main__.__file__)[1])[0]
#PATH = "/homes/dk3810/workspace/python/neuralnilm/figures"
PATH = "/data/dk3810/figures"
SAVE_PLOT_INTERVAL = 25000
N_SEQ_PER_BATCH = 64
MAX_TARGET_POWER = 300
source_dict = dict(
filename='/data/dk3810/ukdale.h5',
appliances=[
['fridge freezer', 'fridge', 'freezer'],
['washer dryer', 'washing machine'],
'kettle',
'HTPC',
'dish washer'
],
max_appliance_powers=[MAX_TARGET_POWER, 2400, 2400, 200, 2500],
on_power_thresholds=[5] * 5,
min_on_durations=[60, 1800, 30, 60, 1800],
min_off_durations=[12, 600, 1, 12, 1800],
# date finished installing meters in house 1 = 2013-04-12
window=("2013-04-12", "2014-12-10"),
seq_length=512,
output_one_appliance=True,
train_buildings=[1],
validation_buildings=[1],
n_seq_per_batch=N_SEQ_PER_BATCH,
standardise_input=True,
independently_center_inputs=False,
skip_probability=0.75,
# skip_probability_for_first_appliance=0.5,
target_is_start_and_end_and_mean=True,
one_target_per_seq=False
)
net_dict = dict(
save_plot_interval=SAVE_PLOT_INTERVAL,
loss_function=lambda x, t: squared_error(x, t).mean(),
updates_func=nesterov_momentum,
learning_rate=1e-3,
learning_rate_changes_by_iteration={
500000: 1e-4,
600000: 1e-5
},
do_save_activations=True,
auto_reshape=False,
plotter=StartEndMeanPlotter(
n_seq_to_plot=32, max_target_power=MAX_TARGET_POWER)
)
def exp_a(name):
global source
source_dict_copy = deepcopy(source_dict)
source_dict_copy.update(dict(
logger=logging.getLogger(name)
))
source = RealApplianceSource(**source_dict_copy)
net_dict_copy = deepcopy(net_dict)
net_dict_copy.update(dict(
experiment_name=name,
source=source
))
NUM_FILTERS = 16
target_seq_length = source.output_shape_after_processing()[1]
net_dict_copy['layers_config'] = [
{
'type': DimshuffleLayer,
'pattern': (0, 2, 1) # (batch, features, time)
},
{
'type': PadLayer,
'width': 4
},
{
'type': Conv1DLayer, # convolve over the time axis
'num_filters': NUM_FILTERS,
'filter_size': 4,
'stride': 1,
'nonlinearity': None,
'border_mode': 'valid'
},
{
'type': Conv1DLayer, # convolve over the time axis
'num_filters': NUM_FILTERS,
'filter_size': 4,
'stride': 1,
'nonlinearity': None,
'border_mode': 'valid'
},
{
'type': DimshuffleLayer,
'pattern': (0, 2, 1), # back to (batch, time, features)
'label': 'dimshuffle3'
},
{
'type': DenseLayer,
'num_units': 512 * 8,
'nonlinearity': rectify,
'label': 'dense0'
},
{
'type': DenseLayer,
'num_units': 512 * 6,
'nonlinearity': rectify,
'label': 'dense1'
},
{
'type': DenseLayer,
'num_units': 512 * 4,
'nonlinearity': rectify,
'label': 'dense2'
},
{
'type': DenseLayer,
'num_units': 512,
'nonlinearity': rectify
},
{
'type': DenseLayer,
'num_units': target_seq_length,
'nonlinearity': None
}
]
net = Net(**net_dict_copy)
return net
def main():
EXPERIMENTS = list('a')
for experiment in EXPERIMENTS:
full_exp_name = NAME + experiment
func_call = init_experiment(PATH, experiment, full_exp_name)
logger = logging.getLogger(full_exp_name)
try:
net = eval(func_call)
run_experiment(net, epochs=None)
except KeyboardInterrupt:
logger.info("KeyboardInterrupt")
break
except Exception as exception:
logger.exception("Exception")
# raise
finally:
logging.shutdown()
if __name__ == "__main__":
main()
"""
Emacs variables
Local Variables:
compile-command: "cp /home/jack/workspace/python/neuralnilm/scripts/e544.py /mnt/sshfs/imperial/workspace/python/neuralnilm/scripts/"
End:
"""
|
There will be more, a whole lot more, to the sophisticated 3D sensor expected to ship on the OLED iPhone than originally thought: a new discovery indicates that Apple’s tenth-anniversary phone might suppress notification sounds when you’re looking at it.
Journalist Jason Snell talked about it on the latest episode of the Upgrade podcast.
The finding was corroborated by iOS developer Guilherme Rambo, who found supporting strings in last week’s inadvertent release of an unredacted build of the HomePod’s version of iOS.
This should be technically feasible because Apple’s rumored 3D sensor based on PrimeSense technology is thought to shoot infrared light that’s invisible to the human eye in all directions, using a receiver and time-of-flight calculations to detect objects and build a live 3D mesh.
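The phrase "time-of-flight calculations" boils down to simple arithmetic: the sensor measures how long an emitted infrared pulse takes to bounce back, and distance follows from the speed of light. Here is a minimal sketch with illustrative values (not Apple’s implementation):
SPEED_OF_LIGHT = 299792458.0  # metres per second

def distance_from_round_trip(seconds):
    # Convert the round-trip time of a reflected IR pulse to object distance.
    return SPEED_OF_LIGHT * seconds / 2.0

# A face roughly 30 cm away reflects a pulse in about 2 nanoseconds:
print(distance_from_round_trip(2e-9))  # ~0.2998 m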
It’s unclear if the sensor will double as an iris scanner or not, but the OLED iPhone is believed to include both 3D-based facial recognition and eye scanning features with dedicated sensors.
It’s also unclear if this will be a default or optional feature, perhaps enabled on a per-app basis.
That said, a switch in Settings to turn this cool feature on or off would be much appreciated. For example, people who disable banner notifications may want to turn this feature off in order to be alerted with a sound when a new notification comes in.
Are you excited for this particular iPhone 8 feature, and why? If it proves real, is this something that you may want to use on a daily basis or perhaps turn off completely?
|
import os
import json
import datetime
from pylons import c
from ming.orm.ormsession import ThreadLocalORMSession
from allura import model as M
from forgewiki import model as WM
from forgewiki.converters import mediawiki2markdown
from forgewiki.converters import mediawiki_internal_links2markdown
from allura.command import base as allura_base
from allura.lib import helpers as h
from allura.lib import utils
from allura.model.session import artifact_orm_session
class MediawikiLoader(object):
"""Load MediaWiki data from json to Allura wiki tool"""
TIMESTAMP_FMT = '%Y%m%d%H%M%S'
def __init__(self, options):
self.options = options
self.nbhd = M.Neighborhood.query.get(name=options.nbhd)
if not self.nbhd:
allura_base.log.error("Can't find neighborhood with name %s"
% options.nbhd)
exit(2)
self.project = M.Project.query.get(shortname=options.project,
neighborhood_id=self.nbhd._id)
if not self.project:
allura_base.log.error("Can't find project with shortname %s "
"and neighborhood_id %s"
% (options.project, self.nbhd._id))
exit(2)
self.wiki = self.project.app_instance('wiki')
if not self.wiki:
allura_base.log.error("Can't find wiki app in given project")
exit(2)
h.set_context(self.project.shortname, 'wiki', neighborhood=self.nbhd)
self.project.notifications_disabled = True
def exit(self, status):
self.project.notifications_disabled = False
ThreadLocalORMSession.flush_all()
ThreadLocalORMSession.close_all()
exit(status)
def load(self):
artifact_orm_session._get().skip_mod_date = True
self.load_pages()
self.project.notifications_disabled = False
artifact_orm_session._get().skip_mod_date = False
ThreadLocalORMSession.flush_all()
ThreadLocalORMSession.close_all()
allura_base.log.info('Loading wiki done')
def _pages(self):
"""Yield path to page dump directory for next wiki page"""
pages_dir = os.path.join(self.options.dump_dir, 'pages')
pages = []
if not os.path.isdir(pages_dir):
return
pages = os.listdir(pages_dir)
for directory in pages:
dir_path = os.path.join(pages_dir, directory)
if os.path.isdir(dir_path):
yield dir_path
def _history(self, page_dir):
"""Yield page_data for next wiki page in edit history"""
page_dir = os.path.join(page_dir, 'history')
if not os.path.isdir(page_dir):
return
pages = os.listdir(page_dir)
        pages.sort()  # ensure that the history is in the right order
for page in pages:
fn = os.path.join(page_dir, page)
try:
with open(fn, 'r') as pages_file:
page_data = json.load(pages_file)
except IOError, e:
allura_base.log.error("Can't open file: %s" % str(e))
self.exit(2)
except ValueError, e:
allura_base.log.error("Can't load data from file %s: %s"
% (fn, str(e)))
self.exit(2)
yield page_data
def _talk(self, page_dir):
"""Return talk data from json dump"""
filename = os.path.join(page_dir, 'discussion.json')
if not os.path.isfile(filename):
return
try:
with open(filename, 'r') as talk_file:
talk_data = json.load(talk_file)
except IOError, e:
allura_base.log.error("Can't open file: %s" % str(e))
self.exit(2)
except ValueError, e:
allura_base.log.error("Can't load data from file %s: %s"
% (filename, str(e)))
self.exit(2)
return talk_data
def _attachments(self, page_dir):
"""Yield (filename, full path) to next attachment for given page."""
attachments_dir = os.path.join(page_dir, 'attachments')
if not os.path.isdir(attachments_dir):
return
attachments = os.listdir(attachments_dir)
for filename in attachments:
yield filename, os.path.join(attachments_dir, filename)
def load_pages(self):
"""Load pages with edit history from json to Allura wiki tool"""
allura_base.log.info('Loading pages into allura...')
for page_dir in self._pages():
for page in self._history(page_dir):
p = WM.Page.upsert(page['title'])
p.viewable_by = ['all']
p.text = mediawiki_internal_links2markdown(
mediawiki2markdown(page['text']),
page['title'])
timestamp = datetime.datetime.strptime(page['timestamp'],
self.TIMESTAMP_FMT)
p.mod_date = timestamp
c.user = (M.User.query.get(username=page['username'].lower())
or M.User.anonymous())
ss = p.commit()
ss.mod_date = ss.timestamp = timestamp
# set home to main page
if page['title'] == 'Main_Page':
gl = WM.Globals.query.get(app_config_id=self.wiki.config._id)
if gl is not None:
gl.root = page['title']
allura_base.log.info('Loaded history of page %s (%s)'
% (page['page_id'], page['title']))
self.load_talk(page_dir, page['title'])
self.load_attachments(page_dir, page['title'])
def load_talk(self, page_dir, page_title):
"""Load talk for page.
page_dir - path to directory with page dump.
page_title - page title in Allura Wiki
"""
talk_data = self._talk(page_dir)
if not talk_data:
return
text = mediawiki2markdown(talk_data['text'])
page = WM.Page.query.get(app_config_id=self.wiki.config._id,
title=page_title)
if not page:
return
thread = M.Thread.query.get(ref_id=page.index_id())
if not thread:
return
timestamp = datetime.datetime.strptime(talk_data['timestamp'],
self.TIMESTAMP_FMT)
c.user = (M.User.query.get(username=talk_data['username'].lower())
or M.User.anonymous())
thread.add_post(
text=text,
discussion_id=thread.discussion_id,
thread_id=thread._id,
timestamp=timestamp,
ignore_security=True)
allura_base.log.info('Loaded talk for page %s' % page_title)
def load_attachments(self, page_dir, page_title):
"""Load attachments for page.
page_dir - path to directory with page dump.
"""
page = WM.Page.query.get(app_config_id=self.wiki.config._id,
title=page_title)
for filename, path in self._attachments(page_dir):
try:
with open(path) as fp:
page.attach(filename, fp,
content_type=utils.guess_mime_type(filename))
except IOError, e:
allura_base.log.error("Can't open file: %s" % str(e))
self.exit(2)
allura_base.log.info('Loaded attachments for page %s.' % page_title)
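# Hypothetical invocation sketch: in practice `options` comes from an Allura
# management command; only the attributes used above are required.
if __name__ == '__main__':
    class _Options(object):
        nbhd = 'Projects'          # neighborhood name (hypothetical)
        project = 'test-project'   # project shortname (hypothetical)
        dump_dir = '/tmp/wiki-dump'
    MediawikiLoader(_Options()).load()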
|
If you are a NEW USER then request/offer journeys to Belfast or from Belfast.
If you are a REGISTERED USER then request/offer journeys to Belfast or from Belfast.
Very rough map of Belfast. Make sure you check Belfast for the accurate location.
|
# -*- coding: utf-8 -*-
'''
Created on 2 Sep 2014
@author: Éric Piel
Copyright © 2014 Éric Piel, Delmic
This file is part of Odemis.
Odemis is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License version 2 as published by the Free Software Foundation.
Odemis is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
You should have received a copy of the GNU General Public License along with Odemis. If not, see http://www.gnu.org/licenses/.
'''
# Very rough converter to simple 8-bit PNG
from __future__ import division
import logging
from odemis import model
from odemis.util import img
import os
import scipy
FORMAT = "PNG"
# list of file-name extensions possible, the first one is the default when saving a file
EXTENSIONS = [u".png"]
# TODO: support 16-bits? But then it loses the point of having a "simple" format?
LOSSY = True # because it doesn't support 16 bits
def _saveAsPNG(filename, data):
# TODO: store metadata
# TODO: support RGB
if data.metadata.get(model.MD_DIMS) == 'YXC':
rgb8 = data
else:
data = img.ensure2DImage(data)
# TODO: it currently fails with large data, use gdal instead?
# tempdriver = gdal.GetDriverByName('MEM')
# tmp = tempdriver.Create('', rgb8.shape[1], rgb8.shape[0], 1, gdal.GDT_Byte)
# tiledriver = gdal.GetDriverByName("png")
# tmp.GetRasterBand(1).WriteArray(rgb8[:, :, 0])
# tiledriver.CreateCopy("testgdal.png", tmp, strict=0)
# TODO: support greyscale png?
# TODO: skip if already 8 bits
# Convert to 8 bit RGB
hist, edges = img.histogram(data)
irange = img.findOptimalRange(hist, edges, 1 / 256)
rgb8 = img.DataArray2RGB(data, irange)
# save to file
scipy.misc.imsave(filename, rgb8)
def export(filename, data, thumbnail=None):
'''
Write a PNG file with the given image
filename (unicode): filename of the file to create (including path). If more
than one data is passed, a number will be appended.
data (list of model.DataArray, or model.DataArray): the data to export.
Metadata is taken directly from the DA object. If it's a list, a multiple
page file is created. It must have 5 dimensions in this order: Channel,
Time, Z, Y, X. However, all the first dimensions of size 1 can be omitted
(ex: an array of 111YX can be given just as YX, but RGB images are 311YX,
so must always be 5 dimensions).
thumbnail (None or numpy.array): Image used as thumbnail for the file. Can be of any
(reasonable) size. Must be either 2D array (greyscale) or 3D with last
dimension of length 3 (RGB). As png doesn't support it, it will
be dropped silently.
'''
if thumbnail is not None:
logging.info("Dropping thumbnail, not supported in PNG")
if isinstance(data, list):
if len(data) > 1:
# Name the files aaa-XXX.png
base, ext = os.path.splitext(filename)
for i, d in enumerate(data):
fn = "%s-%03d%s" % (base, i, ext)
_saveAsPNG(fn, d)
else:
_saveAsPNG(filename, data[0])
else:
_saveAsPNG(filename, data)
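if __name__ == "__main__":
    # Hedged smoke test (assumes a working Odemis install): push a synthetic
    # 16-bit greyscale frame through the 8-bit conversion path above.
    import numpy
    da = model.DataArray(numpy.random.randint(0, 4096, (64, 64)).astype(numpy.uint16))
    export(u"/tmp/test_export.png", da)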
|
Presently there are no payday advances listed in Mount Airy, New Jersey.
Ten recommendations to remember when looking for a payday loan in Mount Airy, New Jersey.
Payday advances often have a great deal of small print at the bottom of the contract. Be sure you read and understand everything written before you sign.
Insufficient-funds fees and bounced-check fees can add up rapidly and can be quite high, so be careful not to overshoot your budget when obtaining a payday advance.
As agonizing as it may be to ask a good friend or relative for cash, it can turn out a lot better than obtaining a payday advance. If that is not an option, try obtaining a credit card or another line of credit before a short-term advance; often the fees on these are much lower than what you'll repay on a cash advance.
To guarantee that you pay off your advance promptly, make sure that you understand when the payday advance or cash loan is due and take the necessary steps to be sure it is repaid.
Make an effort to settle your payday loan in full when it is due, without going past the due date.
Do an integrity check on the business you are considering for the payday advance. You can do this by checking the Better Business Bureau or other rating providers.
If you think that you have been treated wrongly or unlawfully by any particular payday loan or cash advance company, you can file a complaint with your state bureau.
If you do not know how to budget and save your money, you may want to seek debt counselling in order to reduce the need for payday advances in the future.
To avoid having to get payday advances in the future, start building an emergency fund of about $500.
If you plan to obtain a payday advance or cash loan, make sure you have access to your current pay stubs as well as your bank account information.
|
"""
Support for Songpal-enabled (Sony) media devices.
For more details about this platform, please refer to the documentation at
https://home-assistant.io/components/media_player.songpal/
"""
import logging
import voluptuous as vol
from homeassistant.components.media_player import (
DOMAIN, PLATFORM_SCHEMA, SUPPORT_SELECT_SOURCE, SUPPORT_TURN_OFF,
SUPPORT_TURN_ON, SUPPORT_VOLUME_MUTE, SUPPORT_VOLUME_SET,
SUPPORT_VOLUME_STEP, MediaPlayerDevice)
from homeassistant.const import ATTR_ENTITY_ID, CONF_NAME, STATE_OFF, STATE_ON
from homeassistant.exceptions import PlatformNotReady
import homeassistant.helpers.config_validation as cv
REQUIREMENTS = ['python-songpal==0.0.8']
_LOGGER = logging.getLogger(__name__)
CONF_ENDPOINT = 'endpoint'
PARAM_NAME = 'name'
PARAM_VALUE = 'value'
PLATFORM = 'songpal'
SET_SOUND_SETTING = 'songpal_set_sound_setting'
SUPPORT_SONGPAL = SUPPORT_VOLUME_SET | SUPPORT_VOLUME_STEP | \
SUPPORT_VOLUME_MUTE | SUPPORT_SELECT_SOURCE | \
SUPPORT_TURN_ON | SUPPORT_TURN_OFF
PLATFORM_SCHEMA = PLATFORM_SCHEMA.extend({
vol.Optional(CONF_NAME): cv.string,
vol.Required(CONF_ENDPOINT): cv.string,
})
SET_SOUND_SCHEMA = vol.Schema({
vol.Optional(ATTR_ENTITY_ID): cv.entity_id,
vol.Required(PARAM_NAME): cv.string,
vol.Required(PARAM_VALUE): cv.string,
})
async def async_setup_platform(
hass, config, async_add_entities, discovery_info=None):
"""Set up the Songpal platform."""
from songpal import SongpalException
if PLATFORM not in hass.data:
hass.data[PLATFORM] = {}
if discovery_info is not None:
name = discovery_info["name"]
endpoint = discovery_info["properties"]["endpoint"]
_LOGGER.debug("Got autodiscovered %s - endpoint: %s", name, endpoint)
device = SongpalDevice(name, endpoint)
else:
name = config.get(CONF_NAME)
endpoint = config.get(CONF_ENDPOINT)
device = SongpalDevice(name, endpoint)
try:
await device.initialize()
except SongpalException as ex:
_LOGGER.error("Unable to get methods from songpal: %s", ex)
raise PlatformNotReady
hass.data[PLATFORM][endpoint] = device
async_add_entities([device], True)
async def async_service_handler(service):
"""Service handler."""
entity_id = service.data.get("entity_id", None)
params = {key: value for key, value in service.data.items()
if key != ATTR_ENTITY_ID}
for device in hass.data[PLATFORM].values():
if device.entity_id == entity_id or entity_id is None:
_LOGGER.debug("Calling %s (entity: %s) with params %s",
service, entity_id, params)
await device.async_set_sound_setting(
params[PARAM_NAME], params[PARAM_VALUE])
hass.services.async_register(
DOMAIN, SET_SOUND_SETTING, async_service_handler,
schema=SET_SOUND_SCHEMA)
class SongpalDevice(MediaPlayerDevice):
"""Class representing a Songpal device."""
def __init__(self, name, endpoint):
"""Init."""
import songpal
self._name = name
self.endpoint = endpoint
self.dev = songpal.Device(self.endpoint)
self._sysinfo = None
self._state = False
self._available = False
self._initialized = False
self._volume_control = None
self._volume_min = 0
self._volume_max = 1
self._volume = 0
self._is_muted = False
self._sources = []
async def initialize(self):
"""Initialize the device."""
await self.dev.get_supported_methods()
self._sysinfo = await self.dev.get_system_info()
@property
def name(self):
"""Return name of the device."""
return self._name
@property
def unique_id(self):
"""Return a unique ID."""
return self._sysinfo.macAddr
@property
def available(self):
"""Return availability of the device."""
return self._available
async def async_set_sound_setting(self, name, value):
"""Change a setting on the device."""
await self.dev.set_sound_settings(name, value)
async def async_update(self):
"""Fetch updates from the device."""
from songpal import SongpalException
try:
volumes = await self.dev.get_volume_information()
if not volumes:
_LOGGER.error("Got no volume controls, bailing out")
self._available = False
return
if len(volumes) > 1:
_LOGGER.debug(
"Got %s volume controls, using the first one", volumes)
volume = volumes[0]
_LOGGER.debug("Current volume: %s", volume)
self._volume_max = volume.maxVolume
self._volume_min = volume.minVolume
self._volume = volume.volume
self._volume_control = volume
self._is_muted = self._volume_control.is_muted
status = await self.dev.get_power()
self._state = status.status
_LOGGER.debug("Got state: %s", status)
inputs = await self.dev.get_inputs()
_LOGGER.debug("Got ins: %s", inputs)
self._sources = inputs
self._available = True
except SongpalException as ex:
# if we were available, print out the exception
if self._available:
_LOGGER.error("Got an exception: %s", ex)
self._available = False
async def async_select_source(self, source):
"""Select source."""
for out in self._sources:
if out.title == source:
await out.activate()
return
_LOGGER.error("Unable to find output: %s", source)
@property
def source_list(self):
"""Return list of available sources."""
return [x.title for x in self._sources]
@property
def state(self):
"""Return current state."""
if self._state:
return STATE_ON
return STATE_OFF
@property
def source(self):
"""Return currently active source."""
for out in self._sources:
if out.active:
return out.title
return None
@property
def volume_level(self):
"""Return volume level."""
volume = self._volume / self._volume_max
return volume
async def async_set_volume_level(self, volume):
"""Set volume level."""
volume = int(volume * self._volume_max)
_LOGGER.debug("Setting volume to %s", volume)
return await self._volume_control.set_volume(volume)
async def async_volume_up(self):
"""Set volume up."""
return await self._volume_control.set_volume("+1")
async def async_volume_down(self):
"""Set volume down."""
return await self._volume_control.set_volume("-1")
async def async_turn_on(self):
"""Turn the device on."""
return await self.dev.set_power(True)
async def async_turn_off(self):
"""Turn the device off."""
return await self.dev.set_power(False)
async def async_mute_volume(self, mute):
"""Mute or unmute the device."""
_LOGGER.debug("Set mute: %s", mute)
return await self._volume_control.set_mute(mute)
@property
def is_volume_muted(self):
"""Return whether the device is muted."""
return self._is_muted
@property
def supported_features(self):
"""Return supported features."""
return SUPPORT_SONGPAL
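# Usage sketch of the registered service (example values are assumptions;
# valid setting names and values depend on the device):
#
#   service: media_player.songpal_set_sound_setting
#   data:
#     entity_id: media_player.my_soundbar
#     name: nightMode
#     value: "on"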
|
Iron offers our customers reverse logistics services, both for products we build and for products built by others. Reverse logistics refers to the taking back, repairing, refurbishing, upgrading, or disposing of a product and/or part.
We have a Customer Service and Support team responsible for overseeing our reverse logistics services. Their responsibilities include managing warranty and repair services as defined by our Quality Assurance and ISO 9000:2008 processes, along with managing service requests by routing them through our Depot Center or Nationwide On-Site Service support.
Through these reverse logistics services your customers will, one, be assured that their problems will always be dealt with promptly; two, always receive the quality product they were promised; and three, be empowered with the flexibility, predictability and reliability in products they deserve. Regarding the disposal of products and materials, take a look at our Environmental Policy.
Our reverse logistics services relieve you of the headaches that come with post-manufacturing services and provide you with more satisfied customers at a cost-efficient price. This allows your organization to increase profit margins while achieving prolonged success.
|
# Time: O(1)
# Space: O(1)
# Given a time represented in the format "HH:MM",
# form the next closest time by reusing the current digits.
# There is no limit on how many times a digit can be reused.
#
# You may assume the given input string is always valid.
# For example, "01:34", "12:09" are all valid. "1:34", "12:9" are all invalid.
#
# Example 1:
#
# Input: "19:34"
# Output: "19:39"
# Explanation: The next closest time choosing from digits 1, 9, 3, 4, is 19:39, which occurs 5 minutes later.
# It is not 19:33, because this occurs 23 hours and 59 minutes later.
#
# Example 2:
#
# Input: "23:59"
# Output: "22:22"
# Explanation: The next closest time choosing from digits 2, 3, 5, 9, is 22:22.
# It may be assumed that the returned time is next day's time since it is smaller than the input time numerically.
class Solution(object):
def nextClosestTime(self, time):
"""
:type time: str
:rtype: str
"""
h, m = time.split(":")
curr = int(h) * 60 + int(m)
result = None
for i in xrange(curr+1, curr+1441):
t = i % 1440
h, m = t // 60, t % 60
result = "%02d:%02d" % (h, m)
if set(result) <= set(time):
break
return result
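if __name__ == "__main__":
    # Quick check against the examples in the header comment (run under
    # Python 2, since the solution uses xrange).
    assert Solution().nextClosestTime("19:34") == "19:39"
    assert Solution().nextClosestTime("23:59") == "22:22"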
|
Advances in Imaging and Electron Physics merges two long-running serials: Advances in Electronics and Electron Physics and Advances in Optical and Electron Microscopy. The series features extended articles on the physics of electron devices (especially semiconductor devices), particle optics at low and high energies, microlithography, image science and digital image processing, electromagnetic wave propagation, electron microscopy, and the computing methods used in all of these domains.
Detection Theory is an introduction to one of the most important tools for the analysis of data where choices must be made and performance is not perfect. Originally developed for the evaluation of electronic detection, detection theory was adopted by psychologists seeking to understand sensory decision making, and then embraced by students of human memory.
The topics include bonding-based fabrication methods for silicon-on-insulator, photonic crystals, VCSELs, SiGe-based FETs, and MEMS, together with hybrid integration and laser lift-off. The non-specialist will learn the basics of wafer bonding and its various application areas, while the researcher in the field will find up-to-date information about this fast-moving area, including relevant patent information.
"Thin-film microoptics" stands for novel kinds of microoptical components and systems which combine the well-known features of miniaturized optical elements with the specific advantages of thin optical layers. This approach allows for innovative solutions in shaping light fields in the spatial, temporal and spectral domains.
This volume includes six review articles covering a broad range of topics of current interest in modern optics. The first article, by S. Saltiel, A. A. Sukhorukov and Y. S. Kivshar, presents an overview of the various types of parametric interactions in nonlinear optics that are associated with the simultaneous phase-matching of several optical processes in quadratic nonlinear media, the so-called multistep parametric interactions.
|
from __future__ import with_statement
from os import listdir
from os.path import join as join_paths, basename, isdir, isfile
from plistlib import readPlist
from biplist import readPlist as readBinaryPlist
from .conf import BACKUPS_PATH, BACKUP_DEFAULT_SETTINGS
from iosfu import utils
class BackupManager(object):
# Path to backups
path = None
# Backups loaded
backups = {}
    def __init__(self, path=BACKUPS_PATH):
        self.path = path
        # Use a per-instance dict; the class-level default above would
        # otherwise be shared (and mutated) across every BackupManager.
        self.backups = {}
def lookup(self):
"""
Look for backup folders on PATH
"""
folders = listdir(self.path)
for dirname in folders:
path = join_paths(self.path, dirname)
if isdir(path):
backup = Backup(path)
self.backups[backup.id] = backup
def get(self, backup_id):
if backup_id in self.backups and self.backups[backup_id].valid:
return self.backups[backup_id]
else:
raise Exception('Backup not registered')
class Backup(object):
"""
Backup object
"""
# Backup id
id = None
# Backup path
path = None
# Files
files = []
# bool if its valid -> self.init_check()
valid = True
# Required files to mark as valid
_required_files = [
'Info.plist', 'Manifest.mbdb', 'Manifest.plist', 'Status.plist'
]
# File handlers to call methods
_file_handlers = {
'.plist': '_read_plist'
}
_plist = {}
# Data
_data_file = None
_data = {}
    def __init__(self, path):
        self.path = path
        # Per-instance containers; the class-level defaults above would
        # otherwise be shared (and mutated) across every Backup instance.
        self.files = []
        self._plist = {}
        self._data = {}
        self.get_info()
        self._data_file = self.get_data_file()
        self.init_check()
        self.read_data_file()
@property
def name(self):
name = self.data('name') or self.id
return name
def get_data_file(self):
return "{}.iosfu".format(self.path)
def read_data_file(self):
try:
handler = open(self._data_file)
except (OSError, IOError):
            # Create default config file if non-existent
handler = open(self._data_file, 'w+')
handler.write(utils.serialize(BACKUP_DEFAULT_SETTINGS))
handler.seek(0)
finally:
with handler as f:
data_file = f.read()
self._data = utils.deserialize(data_file)
handler.close()
def get_info(self):
"""
Get all the basic info for the backup
"""
self.id = basename(self.path)
# Check all files
for filename in listdir(self.path):
if isfile(join_paths(self.path, filename)):
self.files.append(filename)
# Check handlers
for match in self._file_handlers.keys():
if match in filename:
handler = getattr(self, self._file_handlers[match])
handler(filename)
def init_check(self):
"""
Check if the needed stuff are there to consider this a backup
"""
for required_file in self._required_files:
# Check if required files are there
            # NOTE: this check could misfire when `files` was a shared class
            # attribute; it is now initialised per-instance in __init__.
if required_file not in self.files:
self.valid = False
def exists(self, filename):
"""
Check if the given file exists
"""
return filename in self.files
def get_file(self, filename, handler=False):
"""
Returns given file path
- handler (bool) - Returns handler instead of path
"""
result = None
if self.exists(filename):
file_path = join_paths(self.path, filename)
if handler:
result = open(file_path, 'rb')
else:
result = file_path
return result
#
# File handlers
#
def _read_plist(self, filename):
"""
Handler for .plist files
Reads them and stores on self._plist for plugin access
"""
file_path = self.get_file(filename)
try:
self._plist[filename] = readPlist(file_path)
except:
# Is binaryPlist?
try:
self._plist[filename] = readBinaryPlist(file_path)
except:
# What is it?
pass
#
# Backup data file
#
def data(self, key, value=None):
result = value
if value:
self._data[key] = value
elif key in self._data:
result = self._data[key]
return result
def cache(self, key, value=None):
result = value
if value:
self._data['cache'][key] = value
elif key in self._data['cache']:
result = self._data['cache'][key]
return result
def clear_cache(self):
self._data['cache'] = {}
self.write_data_file()
def write_data_file(self):
handler = open(self._data_file, 'w+')
handler.write(utils.serialize(self._data))
handler.close()
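if __name__ == "__main__":
    # Usage sketch: scan BACKUPS_PATH for backup folders and report them.
    manager = BackupManager()
    manager.lookup()
    for backup_id, backup in manager.backups.items():
        print("{} {} ({})".format(
            backup_id, backup.name, 'valid' if backup.valid else 'invalid'))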
|
In both informal surveys and more rigorous data samples, corporations today report their greatest problems arise from internal issues. In particular, three challenges — alignment, systems and cooperation — top the list of a number of executives across multiple industries. In contrast, decision making, technical problems, innovation and unmotivated employees are far down that same list.
In this Age of Innovation, organizations are confronted with the need to simultaneously manage stability and flexibility, and they do so by strengthening their coordination and collaboration systems. In other words, as complexity increases in today’s workplaces so does the need for greater coordination and collaboration. Coordination systems enable everyone in the organization to identify, understand and act on relevant priorities without attempting to be everything to everybody. Building the shared understanding, agreements, and actions necessary for success are where the collaboration systems become instrumental.
In my own work of facilitating collaborative organizational design or redesign, what is fundamental is laying the groundwork for what I call Deep Collaboration — everyone designs, decides and leads where appropriate. The required groundwork involves the whole organization deciding what governing principles, shared purpose and shared values will be their guiding lights. All present and future decisions (strategic and operational) are then bound by these parameters. In the case where “people, planet and prosperity” emerges as part of those parameters, forthcoming initiatives must bring favourable and equitable outcomes for all three Ps — a win-win-win.
In order for people to master the complexity of contemporary organizational life, everyone directly affected by the decisions and actions taken must be directly involved in formulating them. The complexity required to do this overwhelms most traditional leaders. However, when people are given responsibility for engineering their own success as well as that of the whole collective, they not only master what is required but also are highly committed in their efforts. In contrast, how can a CEO and executive suite of leaders know the individual priorities of all their people and then align these within the organization in order to ensure that what is good for all is actually achieved?
The Innovation Age has evolved not only the complexity of work but also the nature of work from routine and linear — producing widgets — into knowledge advancement. As a work output, knowledge advancement emerges from the shared understandings, agreements and enhanced perspectives of deep collaboration with others. Primarily, deep collaboration complements human intelligence and diversity rather than supplanting both with a technology-based collaboration platform wherein people often become the arm of the technology rather than the reverse being true.
Deep collaboration necessitates participants who are willing to hold many points of view in their search for knowledge and value creation. Influencing or coercing outcomes to move in one direction or another is not considered additive. Thus, “power flexing” serves no purpose, and frankly is a skill that has no value in today’s workplace.
I suspect the significance of this last statement is obvious in that organizations traditionally have been structured around power/authority and accountability. It should then be no surprise that the surveyed executives mentioned at the beginning of this blog see cooperation as a major challenge.
Power-flexing does not support a cooperative culture where everyone needs to work together. However, collaborative design by those directly impacted does indeed support and bring about alignment on priorities and well-designed coordination systems. Furthermore, the design work of organizations needs to ensure their structures and systems no longer reflect power-based accountabilities.
For more on human-centred design and humane workplaces, click here.
|
import errno
import json
import math
import os
import time
from collections import defaultdict
from csv import DictWriter
import warnings
import numpy
from progress.bar import Bar
from django.conf import settings
from django.core.management import BaseCommand, CommandError
from django.contrib.gis.db.models.functions import Area
from rasterio.features import rasterize
from seedsource_core.django.seedsource.models import SeedZone, Region, ZoneSource
from ..constants import PERIODS, VARIABLES
from ..utils import get_regions_for_zone, calculate_pixel_area, generate_missing_bands
from ..dataset import (
ElevationDataset,
ClimateDatasets,
)
from ..statswriter import StatsWriters
from ..zoneconfig import ZoneConfig
class Command(BaseCommand):
help = "Export seed zone statistics and sample data"
def add_arguments(self, parser):
parser.add_argument("output_directory", nargs=1, type=str)
parser.add_argument(
"--zones",
dest="zoneset",
default=None,
help="Comma delimited list of zones sets to analyze. (default is to analyze all available zone sets)",
)
parser.add_argument(
"--variables",
dest="variables",
default=None,
help="Comma delimited list of variables analyze. (default is to analyze all available variables: {})".format(
",".join(VARIABLES)
),
)
parser.add_argument(
"--periods",
dest="periods",
default=None,
help="Comma delimited list of time periods analyze. (default is to analyze all available time periods: {})".format(
",".join(PERIODS)
),
)
parser.add_argument(
"--seed",
dest="seed",
default=None,
help="Seed for random number generator, to reproduce previous random samples",
type=int,
)
def _write_sample(self, output_directory, variable, id, zone_id, data, low, high):
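        # Draw a uniform random sample of at most 1000 values by shuffling a
        # copy in place and slicing; numpy's RNG is seeded in handle(), so the
        # sample is reproducible for a given --seed.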
sample = data.copy()
numpy.random.shuffle(sample)
sample = sample[:1000]
filename = "{}_{}_{}.txt".format(id, low, high)
with open(os.path.join(output_directory, "{}_samples".format(variable), filename), "w") as f:
f.write(",".join(str(x) for x in sample))
f.write(os.linesep)
def handle(self, output_directory, zoneset, variables, periods, seed, *args, **kwargs):
output_directory = output_directory[0]
if zoneset is None or zoneset.strip() == "":
sources = ZoneSource.objects.all().order_by("name")
if len(sources) == 0:
raise CommandError("No zonesets available to analyze")
else:
sources = ZoneSource.objects.filter(name__in=zoneset.split(",")).order_by("name")
if len(sources) == 0:
raise CommandError("No zonesets available to analyze that match --zones values")
if variables is None:
variables = VARIABLES
else:
variables = set(variables.split(","))
missing = variables.difference(VARIABLES)
if missing:
raise CommandError("These variables are not available: {}".format(",".join(missing)))
if periods is None:
periods = PERIODS
else:
periods = set(periods.split(","))
missing = periods.difference(PERIODS)
if missing:
raise CommandError("These periods are not available: {}".format(",".join(missing)))
### Initialize random seed
if seed is None:
seed = int(time.time())
print("Using random seed: {}".format(seed))
numpy.random.seed(seed)
### Create output directories
if not os.path.exists(output_directory):
os.makedirs(output_directory)
for period in periods:
print("----------------------\nProcessing period {}\n".format(period))
period_dir = os.path.join(output_directory, period)
for variable in variables:
sample_dir = os.path.join(period_dir, "{}_samples".format(variable))
if not os.path.exists(sample_dir):
os.makedirs(sample_dir)
with StatsWriters(period_dir, variables) as writer:
for source in sources:
zones = source.seedzone_set.annotate(area_meters=Area("polygon")).all().order_by("zone_id")
with ZoneConfig(source.name) as config, ElevationDataset() as elevation_ds, ClimateDatasets(
period=period, variables=variables
) as climate:
for zone in Bar(
"Processing {} zones".format(source.name), max=source.seedzone_set.count(),
).iter(zones):
# calculate area of zone polygon in acres
poly_acres = round(zone.area_meters.sq_m * 0.000247105, 1)
zone_xmin, zone_ymin, zone_xmax, zone_ymax = zone.polygon.extent
zone_ctr_x = round(((zone_xmax - zone_xmin) / 2) + zone_xmin, 5)
zone_ctr_y = round(((zone_ymax - zone_ymin) / 2) + zone_ymin, 5)
region = get_regions_for_zone(zone)
elevation_ds.load_region(region.name)
climate.load_region(region.name)
window, coords = elevation_ds.get_read_window(zone.polygon.extent)
transform = coords.affine
elevation = elevation_ds.data[window]
# calculate pixel area based on UTM centered on window
pixel_area = round(
calculate_pixel_area(transform, elevation.shape[1], elevation.shape[0]) * 0.000247105,
1,
)
zone_mask = rasterize(
(json.loads(zone.polygon.geojson),),
out_shape=elevation.shape,
transform=transform,
fill=1, # mask is True OUTSIDE the zone
default_value=0,
dtype=numpy.dtype("uint8"),
).astype("bool")
# count rasterized pixels
raster_pixels = (zone_mask == 0).sum()
nodata_mask = elevation == elevation_ds.nodata_value
mask = nodata_mask | zone_mask
# extract all data not masked out as nodata or outside zone
# convert to feet
elevation = (elevation[~mask] / 0.3048).round().astype("int")
# if there are no pixels in the mask, skip this zone
if elevation.size == 0:
continue
min_elevation = math.floor(numpy.nanmin(elevation))
max_elevation = math.ceil(numpy.nanmax(elevation))
bands = list(config.get_elevation_bands(zone, min_elevation, max_elevation))
bands = generate_missing_bands(bands, min_elevation, max_elevation)
if not bands:
# min / max elevation outside defined bands
raise ValueError(
"Elevation range {} - {} ft outside defined bands\n".format(
min_elevation, max_elevation
)
)
### Extract data for each variable within each band
for variable, ds in climate.items():
# extract data with same shape as elevation above
data = ds.data[window][~mask]
# count the non-masked data pixels
# variables may be masked even if elevation is valid
zone_unit_pixels = data[data != ds.nodata_value].size
for band in bands:
low, high = band[:2]
band_mask = (elevation >= low) & (elevation <= high)
if not numpy.any(band_mask):
continue
# extract actual elevation range within the mask as integer feet
band_elevation = elevation[band_mask]
band_range = [
math.floor(numpy.nanmin(band_elevation)),
math.ceil(numpy.nanmax(band_elevation)),
]
# extract data within elevation range
band_data = data[band_mask]
# then apply variable's nodata mask
band_data = band_data[band_data != ds.nodata_value]
if not band_data.size:
continue
writer.write_row(
variable,
zone.zone_uid,
band,
band_range,
band_data,
period=period,
zone_set=zone.source,
species=zone.species.upper() if zone.species != "generic" else zone.species,
zone_unit=zone.zone_id,
zone_unit_poly_acres=poly_acres,
zone_unit_raster_pixels=raster_pixels,
zone_unit_raster_acres=raster_pixels * pixel_area,
zone_unit_pixels=zone_unit_pixels,
zone_unit_acres=zone_unit_pixels * pixel_area,
zone_unit_low=min_elevation,
zone_unit_high=max_elevation,
zone_pixels=band_data.size,
zone_acres=band_data.size * pixel_area,
zone_unit_ctr_x=zone_ctr_x,
zone_unit_ctr_y=zone_ctr_y,
zone_unit_xmin=round(zone_xmin, 5),
zone_unit_ymin=round(zone_ymin, 5),
zone_unit_xmax=round(zone_xmax, 5),
zone_unit_ymax=round(zone_ymax, 5),
)
self._write_sample(
period_dir, variable, zone.zone_uid, zone.zone_id, band_data, *band_range
)
|
50 superfoods per scoop from Orgain's original organic super sprouts, fruits, berries, veggie, greens, grasses, and foods blends. 1 billion clinically proven probiotics, 6 grams of organic dietary fiber, high in antioxidants.
Supplement your diet with superfoods, antioxidants, and probiotics to help support digestive and gut health. Superfoods in this powder include beets, ginger, turmeric, wheatgrass, barley grass, spinach, broccoli, kale, acai, millet, amaranth, buckwheat, quinoa, chia, and flax.
Ideal for healthy, on-the-go nourishment for men, women, and kids. Whether you are a busy professional, mom, dad, athlete or student, this is perfect as an antioxidant boost and an addition to your meal replacement shakes or pre and post workout drinks. Mix into water, juice, or your favorite smoothie recipe and enjoy! Great for overnight oats.
If you have any questions about this product by Orgain, contact us by completing and submitting the form below. If you are looking for a specif part number, please include it with your message.
|
# -*- coding: utf-8 -*-
##############################################################################
#
# OpenERP, Open Source Management Solution
# Copyright (C) 2016 - now Bytebrand Outsourcing AG (<http://www.bytebrand.net>).
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU Lesser General Public License as
# published by the Free Software Foundation, either version 3 of the
# License, or (at your option) any later version.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU Affero General Public License for more details.
#
# You should have received a copy of the GNU Lesser General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
#
##############################################################################
import datetime as dtime
from datetime import datetime, timedelta
from odoo import api, fields, models, _
from odoo.exceptions import ValidationError
import pytz
import logging

_logger = logging.getLogger(__name__)
class ResourceCalendar(models.Model):
_inherit = 'resource.calendar'
@api.multi
def get_working_intervals_of_day(self, start_dt=None,
end_dt=None, leaves=None,
compute_leaves=False, resource_id=None,
default_interval=None):
work_limits = []
if start_dt is None and end_dt is not None:
start_dt = end_dt.replace(hour=0, minute=0, second=0)
elif start_dt is None:
start_dt = datetime.now().replace(hour=0, minute=0, second=0)
else:
work_limits.append((start_dt.replace(
hour=0, minute=0, second=0), start_dt))
if end_dt is None:
end_dt = start_dt.replace(hour=23, minute=59, second=59)
else:
work_limits.append((end_dt, end_dt.replace(
hour=23, minute=59, second=59)))
assert start_dt.date() == end_dt.date(), \
'get_working_intervals_of_day is restricted to one day'
intervals = []
work_dt = start_dt.replace(hour=0, minute=0, second=0)
# no calendar: try to use the default_interval, then return directly
if not self.ids:
working_interval = []
if default_interval:
working_interval = (
start_dt.replace(hour=default_interval[0],
minute=0, second=0),
start_dt.replace(hour=default_interval[1],
minute=0, second=0))
# intervals = self._interval_remove_leaves(working_interval, work_limits)
date_from = start_dt.replace(hour=default_interval[0],
minute=0, second=0).replace(tzinfo=pytz.UTC)
date_to = start_dt.replace(hour=default_interval[1],
minute=0, second=0).replace(tzinfo=pytz.UTC)
intervals += self._leave_intervals(date_from, date_to)
return intervals
#working_intervals = []
for calendar_working_day in self.get_attendances_for_weekdays(
[start_dt.weekday()], start_dt,
end_dt):
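            # Odoo stores attendance times as fractional hours (e.g. 8.5 means
            # 08:30); split on the decimal point and convert the fractional
            # part to minutes, with branches for one-, two- and three-digit
            # fractions.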
str_time_from_dict = str(calendar_working_day.hour_from).split('.')
hour_from = int(str_time_from_dict[0])
if int(str_time_from_dict[1]) < 10:
minutes_from = int(60 * int(str_time_from_dict[1]) / 10)
elif int(str_time_from_dict[1]) > 100:
m = str_time_from_dict[1][:2] + '.' + str_time_from_dict[1][2:]
m = float(m)
minutes_from = round(60 * m / 100)
else:
minutes_from = int(60 * int(str_time_from_dict[1]) / 100)
str_time_to_dict = str(calendar_working_day.hour_to).split('.')
hour_to = int(str_time_to_dict[0])
if int(str_time_to_dict[1]) < 10:
minutes_to = int(60 * int(str_time_to_dict[1]) / 10)
elif int(str_time_to_dict[1]) > 100:
m = str_time_to_dict[1][:2] + '.' + str_time_to_dict[1][2:]
m = float(m)
minutes_to = round(60 * m / 100)
else:
minutes_to = int(60 * int(str_time_to_dict[1]) / 100)
working_interval = (
work_dt.replace(hour=hour_from).replace(minute=minutes_from),
work_dt.replace(hour=hour_to).replace(minute=minutes_to)
)
# working_intervals += self._interval_remove_leaves(working_interval, work_limits)
intervals.append(working_interval)
date_from = work_dt.replace(hour=hour_from).replace(minute=minutes_from).replace(tzinfo=pytz.UTC)
date_to = work_dt.replace(hour=hour_to).replace(minute=minutes_to).replace(tzinfo=pytz.UTC)
# working_intervals += self._leave_intervals(date_from, date_to)
# find leave intervals
if leaves is None and compute_leaves:
leaves = self._get_leave_intervals(resource_id=resource_id)
# filter according to leaves
# for interval in working_intervals:
# if not leaves:
# leaves = []
# work_intervals = self._interval_remove_leaves(interval, leaves)
# intervals += work_intervals
return intervals
@api.multi
def get_working_hours_of_date(self, start_dt=None,
end_dt=None, leaves=None,
compute_leaves=None, resource_id=None,
default_interval=None):
""" Get the working hours of the day based on calendar. This method uses
get_working_intervals_of_day to have the work intervals of the day. It
then calculates the number of hours contained in those intervals. """
res = dtime.timedelta()
intervals = self.get_working_intervals_of_day(
start_dt, end_dt, leaves,
compute_leaves, resource_id,
default_interval)
for interval in intervals:
res += interval[1] - interval[0]
return seconds(res) / 3600.0
@api.multi
def get_bonus_hours_of_date(self, start_dt=None,
end_dt=None, leaves=None,
compute_leaves=False, resource_id=None,
default_interval=None):
""" Get the working hours of the day based on calendar. This method uses
get_working_intervals_of_day to have the work intervals of the day. It
then calculates the number of hours contained in those intervals. """
res = dtime.timedelta()
intervals = self.get_working_intervals_of_day(
start_dt, end_dt, leaves,
compute_leaves, resource_id,
default_interval)
for interval in intervals:
res += interval[1] - interval[0]
return seconds(res) / 3600.0
@api.multi
def get_attendances_for_weekdays(self, weekdays, start_dt, end_dt):
""" Given a list of weekdays, return matching
resource.calendar.attendance"""
res = []
for att in self.attendance_ids:
if int(att.dayofweek) in weekdays:
if not att.date_from or not att.date_to:
res.append(att)
else:
date_from = datetime.strptime(att.date_from, '%Y-%m-%d')
date_to = datetime.strptime(att.date_to, '%Y-%m-%d')
if date_from <= start_dt <= date_to:
res.append(att)
return res
use_overtime = fields.Boolean(string="Use Overtime Setting")
min_overtime_count = fields.Integer(string="Minimum overtime days",
default=0,
required=True)
count = fields.Integer(string="Percent Count",
default=0,
required=True)
overtime_attendance_ids = fields.One2many(
'resource.calendar.attendance.overtime',
'overtime_calendar_id',
string='Overtime')
two_days_shift = fields.Boolean(string='Shift between two days',
default=True,
help='Use for night shift between '
'two days.')
@api.constrains('min_overtime_count')
def _check_min_overtime_count(self):
"""Ensure that field min_overtime_count is >= 0"""
        if self.min_overtime_count < 0:
            raise ValidationError(_("Minimum overtime days cannot be negative."))
@api.constrains('two_days_shift')
def _check_two_days_shift(self):
if self.two_days_shift is False:
for attendance_id in self.overtime_attendance_ids:
if attendance_id.hour_to <= attendance_id.hour_from:
raise ValidationError("Overtime to must be greater than "
"overtime from when two days "
"shift is not using.")
@api.multi
def _get_leave_intervals(self, resource_id=None,
start_datetime=None, end_datetime=None):
self.ensure_one()
if resource_id:
domain = ['|',
('resource_id', '=', resource_id),
('resource_id', '=', False)]
else:
domain = [('resource_id', '=', False)]
if start_datetime:
domain += [('date_to', '>', fields.Datetime.to_string(
start_datetime + timedelta(days=-1)))]
if end_datetime:
domain += [('date_from', '<',
fields.Datetime.to_string(start_datetime +
timedelta(days=1)))]
leaves = self.env['resource.calendar.leaves'].search(
domain + [('calendar_id', '=', self.id)])
filtered_leaves = self.env['resource.calendar.leaves']
for leave in leaves:
if not leave.tz:
if self.env.context.get('tz'):
leave.tz = self.env.context.get('tz')
else:
leave.tz = 'UTC'
if start_datetime:
leave_date_to = to_tz(
fields.Datetime.from_string(leave.date_to), leave.tz)
if not leave_date_to >= start_datetime:
continue
if end_datetime:
leave_date_from = to_tz(
fields.Datetime.from_string(leave.date_from), leave.tz)
if not leave_date_from <= end_datetime:
continue
filtered_leaves += leave
return [self._interval_new(
to_tz(fields.Datetime.from_string(leave.date_from), leave.tz),
to_tz(fields.Datetime.from_string(leave.date_to), leave.tz),
{'leaves': leave}) for leave in filtered_leaves]
@api.multi
def initial_overtime(self):
contracts = self.env['hr.contract'].search(
[('resource_calendar_id', '=', self.id)])
employee_ids = [contract.employee_id.id for contract in contracts]
for employee in self.env['hr.employee'].browse(set(employee_ids)):
employee.initial_overtime()
class ResourceCalendarAttendanceOvertime(models.Model):
_name = "resource.calendar.attendance.overtime"
_order = 'dayofweek, hour_from'
_description = 'ResourceCalendarAttendanceOvertime'
name = fields.Char(required=True)
dayofweek = fields.Selection([('0', 'Monday'),
('1', 'Tuesday'),
('2', 'Wednesday'),
('3', 'Thursday'),
('4', 'Friday'),
('5', 'Saturday'),
('6', 'Sunday')
],
string='Day of Week',
required=True,
index=True,
default='0')
date_from = fields.Date(string='Starting Date')
date_to = fields.Date(string='End Date')
hour_from = fields.Float(string='Overtime from',
required=True,
index=True,
help="Start and End time of Overtime.")
hour_to = fields.Float(string='Overtime to',
required=True)
overtime_calendar_id = fields.Many2one("resource.calendar",
string="Resource's Calendar",
required=True,
ondelete='cascade')
def seconds(td):
assert isinstance(td, dtime.timedelta)
return (td.microseconds + (
td.seconds + td.days * 24 * 3600) * 10 ** 6) / 10. ** 6
def to_tz(datetime, tz_name):
tz = pytz.timezone(tz_name)
return pytz.UTC.localize(datetime.replace(tzinfo=None),
is_dst=False).astimezone(tz).replace(tzinfo=None)
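# Illustrative: to_tz(datetime(2020, 1, 1, 12, 0), 'Europe/Zurich') treats the
# naive input as UTC and returns the naive local time datetime(2020, 1, 1, 13, 0).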
class ResourceCalendarAttendance(models.Model):
_inherit = "resource.calendar.attendance"
@api.multi
def write(self, values):
if 'date_from' in values.keys() or 'date_to' in values.keys():
old_date_from = self.date_from
old_date_to = self.date_to
new_date_from = values.get('date_from') or self.date_from
new_date_to = values.get('date_to') or self.date_to
start_calc = None
if not old_date_from or not new_date_from:
start_calc = (datetime.now().date().replace(
month=1, day=1)).strftime("%Y-%m-%d")
end_calc = None
if not old_date_to or not new_date_to:
end_calc = (datetime.now().date().replace(
month=12, day=31)).strftime("%Y-%m-%d")
res = super(ResourceCalendarAttendance, self).write(values)
list_of_dates = filter(None, [new_date_from, new_date_to,
old_date_from, old_date_to,
end_calc, start_calc])
list_of_dates = [datetime.strptime(date, "%Y-%m-%d") for date in
list_of_dates]
date_end = max(list_of_dates)
date_start = min(list_of_dates)
self.change_working_time(date_start, date_end)
return res
else:
return super(ResourceCalendarAttendance, self).write(values)
@api.model
def create(self, values):
date_start = values.get('date_from') or (
datetime.now().date().replace(month=1, day=1)).strftime("%Y-%m-%d")
date_end = values.get('date_to') or (
datetime.now().date().replace(month=12, day=31)).strftime(
"%Y-%m-%d")
res = super(ResourceCalendarAttendance, self).create(values)
res.change_working_time(date_start, date_end)
return res
@api.multi
def unlink(self):
date_start = self.date_from or (
datetime.now().date().replace(month=1, day=1)).strftime("%Y-%m-%d")
date_end = self.date_to or (
datetime.now().date().replace(month=12, day=31)).strftime(
"%Y-%m-%d")
resource_calendar_id = self.calendar_id.id
res = super(ResourceCalendarAttendance, self).unlink()
self.change_working_time(date_start, date_end, resource_calendar_id)
return res
@api.multi
def change_working_time(self, date_start, date_end,
resource_calendar_id=False):
_logger.info(date_start)
_logger.info(date_end)
analytic_pool = self.env['employee.attendance.analytic']
if not resource_calendar_id:
resource_calendar_id = self.calendar_id.id
contract_ids = self.env['hr.contract'].search(
[('state', '=', 'open'),
('resource_calendar_id', '=', resource_calendar_id)]).ids
lines = analytic_pool.search(
[('contract_id', 'in', contract_ids),
('attendance_date', '<=', date_end),
('attendance_date', '>=', date_start)])
_logger.info(len(lines))
for line in lines:
analytic_pool.recalculate_line(line.name)
|
Evanston, IL. The Chicago Force (5-0) turned in another dominant performance on the football field at Evanston High School with a 54-0 win over the West Michigan Mayhem (3-2) last Saturday night.
Chicago scored on its first two series as Smith scored on a six-yard run and Gray caught a 30-yard pass from Grisafe. The Force jumped out to a 38-0 lead in the first half as Jeanette Gray, named offensive player of the game, caught three touchdown passes while the Mayhem was held to 41 yards.
Gray had one of the best efforts of her career with five catches for 181 yards. “Last year, she was one of the leading receivers in the WFA. Jeanette is a special talent.” Gray came to the Force after playing high school and college basketball in Valparaiso. Gray has reaped the benefits of playing in Chicago’s fast break no huddle offense led by Grisafe.
“You know we are going to throw the football,” Gray said after the game. “You know you are going to get a chance to make some plays. You have a good quarterback that is going to put the ball where it needs to be. It’s a fun offense.” Chicago has scored 251 points this season behind the 1-2 punch of Grisafe and Smith. Gray still believes there is room for improvement. “We still have a lot of work to do,” Gray admitted of the offense.
The defense had a hand in this winning outcome. Chicago put the clamps on Mayhem running back Telly Robinson and quarterback Nicole Beier. Robinson had 34 yards rushing on 17 carries while Beier had just nine completed passes in 30 attempts. The Mayhem was averaging 32.5 ppg on offense.
Smith tallied twice in the second half to put the finishing touches on the win. Chicago will face the Cleveland Fusion this Saturday night in Cleveland.
|
import time
import BaseHTTPServer
import re
from datetime import datetime
HOST_NAME = '10.33.43.6' # !!!REMEMBER TO CHANGE THIS!!!
PORT_NUMBER = 8080 # Maybe set this to 9000.
class Event:
time = "null"
pid = 0
delay = 5
class MyHandler(BaseHTTPServer.BaseHTTPRequestHandler):
def do_HEAD(s):
s.send_response(200)
s.send_header("Content-type", "text/html")
s.end_headers()
def do_GET(s):
"""Respond to a GET request."""
s.send_response(200)
s.send_header("Content-type", "text/html")
s.end_headers()
#Process transactions.txt
transactions_list = []
status = 0
e = Event()
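        # transactions.txt appears to consist of repeating three-line records:
        #   state 0: "<name>-<pid>-<...>" header line   -> capture the pid
        #   state 1: "[YYYY-mm-dd HH:MM:SS.ffffff]..."  -> capture the start time
        #   state 2: "Transaction took H:M:S[.f]"       -> capture the duration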
for line in open("transactions.txt"):
line = line.strip()
if (status == 0):
e.pid = re.split("(\S+)-(\d+)-(\S+)", line)[2]
status = 1
continue
if (status == 1):
e.time = datetime.strptime(line[1:26], "%Y-%m-%d %H:%M:%S.%f")
status = 2
continue
if (status == 2):
if (line[0:11] != "Transaction"):
continue
if "." in line[17:]:
e.delay = datetime.strptime(line[17:], "%H:%M:%S.%f")
else:
e.delay = datetime.strptime(line[17:], "%H:%M:%S")
transactions_list.append(e)
e = Event()
status = 0
epoch = datetime(1900, 1, 1, 0, 0, 0)
pid_list = ["Time", ]
timeline = []
timeline_list = []
sorted_transactions_list = sorted(transactions_list, key=lambda x: time.mktime(x.time.timetuple()))
for event in sorted_transactions_list:
if event.pid not in pid_list:
pid_list.append(event.pid)
for event in sorted_transactions_list:
timeline = [0]*(len(pid_list))
timeline[0] = str(event.time)
delta = event.delay - epoch
timeline[pid_list.index(event.pid)] = delta.total_seconds() * 1000.0
timeline_list.append(timeline)
for content in open("main.html"):
if (content.strip() == "HEAD"):
s.wfile.write(str(pid_list)+",")
elif (content.strip() == "BODY"):
s.wfile.write(str(timeline_list)[1:-1])
else:
s.wfile.write(content)
if __name__ == '__main__':
server_class = BaseHTTPServer.HTTPServer
httpd = server_class((HOST_NAME, PORT_NUMBER), MyHandler)
print time.asctime(), "Server Starts - %s:%s" % (HOST_NAME, PORT_NUMBER)
try:
httpd.serve_forever()
except KeyboardInterrupt:
pass
httpd.server_close()
print time.asctime(), "Server Stops - %s:%s" % (HOST_NAME, PORT_NUMBER)
|
Carlson's Motor Sales provides Transfer Case Services to Concord, NH, Bow, NH, Hookset, NH, and other surrounding areas.
Why Should You Have Transfer Case Services Performed at Carlson's Motor Sales?
We proudly service the Transfer Case Service needs of customers in Concord, NH, Bow, NH, Hookset, NH, and surrounding areas.
|
from pandas import read_csv
import pastas as ps
ps.set_log_level("ERROR")
def test_create_project():
pr = ps.Project(name="test")
return pr
def test_project_add_oseries():
pr = test_create_project()
obs = read_csv("tests/data/obs.csv", index_col=0, parse_dates=True,
squeeze=True)
pr.add_oseries(obs, name="heads", metadata={"x": 0.0, "y": 0})
return pr
def test_project_add_stresses():
pr = test_project_add_oseries()
prec = read_csv("tests/data/rain.csv", index_col=0, parse_dates=True,
squeeze=True)
evap = read_csv("tests/data/evap.csv", index_col=0, parse_dates=True,
squeeze=True)
pr.add_stress(prec, name="prec", kind="prec", metadata={"x": 10, "y": 10})
pr.add_stress(evap, name="evap", kind="evap",
metadata={"x": -10, "y": -10})
return pr
def test_project_add_model():
pr = test_project_add_stresses()
pr.add_models(model_name_prefix="my_", model_name_suffix="_model")
return pr
def test_project_add_recharge():
pr = test_project_add_model()
pr.add_recharge()
return pr
def test_project_solve_models():
pr = test_project_add_recharge()
pr.solve_models()
return pr
def test_project_get_parameters():
pr = test_project_solve_models()
return pr.get_parameters(["recharge_A", "noise_alpha"])
def test_project_get_statistics():
pr = test_project_solve_models()
return pr.get_statistics(["evp", "aic"])
def test_project_del_model():
pr = test_project_add_model()
pr.del_model("my_heads_model")
return pr
def test_project_del_oseries():
pr = test_project_add_oseries()
pr.del_oseries("heads")
return pr
def test_project_del_stress():
pr = test_project_add_stresses()
pr.del_stress("prec")
return pr
def test_project_get_distances():
pr = test_project_add_stresses()
return pr.get_distances()
def test_project_get_nearest_stresses():
pr = test_project_add_stresses()
pr.get_nearest_stresses(kind="prec", n=2)
def test_project_dump_to_file():
pr = test_project_solve_models()
pr.to_file("testproject.pas")
return
def test_project_load_from_file():
pr = ps.io.load("testproject.pas")
return pr
def test_project_get_oseries_metadata():
pr = test_project_add_oseries()
return pr.get_oseries_metadata(["heads"], ["x", "y"])
def test_project_get_oseries_settings():
pr = test_project_add_oseries()
return pr.get_oseries_settings(["heads"], ["tmin", "tmax", "freq"])
def test_project_get_metadata():
pr = test_project_add_stresses()
return pr.get_metadata()
def test_project_get_file_info():
pr = test_project_add_oseries()
return pr.get_file_info()
def test_project_update_model_series():
pr = test_project_solve_models()
pr.update_model_series()
return
|
This Lycra tank is the ultimate must-have for any type of workout. Available in a variety of colors, you can mix and match with any Otomix pant. Great built in bra with elastic band for additional support. Straps meet in the back so they never fall off. Made of Supplex Lycra.
|
from setuptools import setup, find_packages
import os
import cms
CLASSIFIERS = [
'Development Status :: 5 - Production/Stable',
'Environment :: Web Environment',
'Framework :: Django',
'Intended Audience :: Developers',
'License :: OSI Approved :: BSD License',
'Operating System :: OS Independent',
'Programming Language :: Python',
'Topic :: Internet :: WWW/HTTP :: Dynamic Content',
'Topic :: Software Development',
'Topic :: Software Development :: Libraries :: Application Frameworks',
"Programming Language :: Python :: 2.6",
"Programming Language :: Python :: 2.7",
"Programming Language :: Python :: 3.3",
"Programming Language :: Python :: 3.4",
]
setup(
author="Patrick Lauber",
author_email="digi@treepy.com",
name='django-cms',
version=cms.__version__,
description='An Advanced Django CMS',
long_description=open(os.path.join(os.path.dirname(__file__), 'README.rst')).read(),
url='https://www.django-cms.org/',
license='BSD License',
platforms=['OS Independent'],
classifiers=CLASSIFIERS,
install_requires=[
'Django>=1.6,<1.8',
'django-classy-tags>=0.5',
'html5lib<0.9999',
'django-treebeard==3.0',
'django-sekizai>=0.7',
'djangocms-admin-style'
],
extras_require={
'south': ['south>=1.0.0'],
},
packages=find_packages(exclude=["project", "project.*"]),
include_package_data=True,
zip_safe=False,
test_suite='runtests.main',
)
|
Last fall I went to an auction house sale of vintage designer clothes. I found an iconic Yves Saint Laurent Rive Gauche safari tunic, the Saharienne, in royal blue. It was one size too small, but I was determined to get it because it was YSL and it seemed like there was enough room. When I tried it on, though, it would not come all the way down over my hips. Below is the Saharienne in classic khaki.
This spring I found a YSL inspired linen top that I am wearing below. You don't find linen around too much these days because Americans don't like the wrinkles but there is nothing like it in the sweltering heat. Europeans are much more likely to be seen wearing it -- they embrace the wrinkles and find it luxurious.
I'm wearing the linen top with some white Citizens of Humanity jeans, a Clare V crossbody bag and my Gucci kerchief scarf that I've had forever. I call the things I have had in my wardrobe for a long time -- old friends. If you shop well, you will have them too.
My Aquazzura or any laced shoes look great with cropped pants or a long flowy dress.
|
from main import BaseHandler
from models.user import store_blog_user
from base.helpers import validate
from base.helpers import make_secure_val
import time
class SignupHandler(BaseHandler):
"""
Handles form validation, checks if the username and/or email id already exists.
In case form is submitted with proper inputs it redirects user to the welcome
page and adds user to the db
"""
def get(self):
self.render("signup.html")
def post(self):
# obtain input values from the form
input_username = self.request.get('username')
input_password = self.request.get('password')
input_verify = self.request.get('verify')
input_email = self.request.get('email')
validate_response = validate(input_username,
input_password,
input_verify,
input_email)
        # a non-empty validate_response dict contains per-field error messages;
        # an empty dict means the inputs passed validation (the username/email
        # may still already be taken, which store_blog_user checks below)
if validate_response:
username_error = validate_response.get('username_error', "")
password_error = validate_response.get('password_error', "")
verify_error = validate_response.get('verify_error', "")
email_error = validate_response.get('email_error', "")
self.render("signup.html",
username_error=username_error,
password_error=password_error,
verify_error=verify_error,
email_error=email_error,
input_username=input_username,
input_email=input_email)
else:
store_user_response = store_blog_user(input_username,
input_password,
input_email)
if store_user_response:
self.render("signup.html",
store_user_error=store_user_response,
input_username=input_username,
input_email=input_email)
# user successfully stored in db
else:
user_cookie = make_secure_val(str(input_username))
self.response.headers.add_header(
"Set-Cookie", "user=%s; Path=/" %
user_cookie)
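                # short pause (assumption: lets the eventually-consistent
                # datastore register the new user before the redirect renders)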
time.sleep(0.1)
self.redirect('/')
|
ES Replace LCD Touch Screen Display Digitizer for PS Vita 1000.
Flash0 dump, then what? - Wololo - PS4, PS Vita .
A few weeks ago, our forum member The Z tweeted about a "leak" of a full dump of the Flash0 of the PSP emu on the PS Vita. Unless you've been in the homebrew.
Vodafone has revealed more details about its partnership with Sony for the PS Vita which sees the company as the exclusive 3G carrier for . all-screen Z5 phone next .
Why ShopTo £ Login . X Register . • PS Vita •• Video Games . •• Large Screen 10" And Above • Laptop Accessories •• Backpacks .
Buy PlayStation games online at Jumia Ghana. Discover a great selection of the latest PS4 games and accessories at the best prices. Order now and pay on delivery.
PS VITA SLIM 2ND GEN - Wi-Fi - 1GB Marvel vs Capcom ⚫️ CAMERA,TOUCH SCREEN, INTERNET WORKS👌🏿 ⚫️ NO SCRATCHES ON THE SCREEN ⚫️ MARVEL VS CAPCOM 3 GAME If you want it swing by and take it .
Petition update · Whitelist hacked PlayStation TV alert .
Whitelist hacked PlayStation TV alert: BEWARE GAMES THAT . save-our-playstation-tv-and-vita-petition-for-100-vita . to display the computer screen on .
Shop durable PS Vita Consoles now on Jumia Market. Get yours now and play games with your friends all day long! Order online and pay on delivery!
List of PlayStation Vita games (A–L) Jump to navigation Jump to search. This is a list of games for the PlayStation Vita that are distributed through retail via .
Discover the latest Ps Vita from Sony Computer Entertainment on Jumia at the lowest prices in Ghana. Order now and pay cash on delivery.
Discover PSP consoles online at Jumia Nigeria . Sony Computer Entertainment Playstation Portable Model . Generic 8GB 4.3 Inch LCD Screen Mp4 Player Handheld .
Ps vita available for sale pls read the discription below .
PS Vita Handheld Console OLED Screen Firmware 3.60 HEnkaku with 16GB Memory Card | Video Games & Consoles, Video Game Consoles | eBay!
Ps Vita WiFi/3G 16 GB for sale in Freehold, NJ - 5miles .
Feb 05, 2014 · Sony PS Vita Slim review. Sony's latest is a sleek, stylish and reasonably priced handheld console that's designed for gamers on . PS Vita Slim: Size, build and screen.
PS Vita Games - Buy PS Vita Games Online | Jumia .
Konga Online shopping | Buy Phones,Fashion,Electronics .
Konga; Online Shopping . PS Vita. Nintendo 3DS. . Mixers. Drums & Percussion. Equalisers. Shop by TV Screen Size. 32 Inch TVs. 39 Inch TVs. 40 Inch TVs. 42 Inch .
Nintendo Switch : Le coût de fabrication dévoilé - La .
Aug 13, 2012 · It's the start of my walkthrough for Tomb Raider Legend, . I found the loading screen images that I use as . Tomb Raider Legend - Level 4 - Ghana .
PS Vita – PlayStation Vita Console | PS Vita Features .
Amazon: PlayStation Portable 3000 Core Pack .
Khanka All-in-one Double Compartment Hard Carry Travel Case Bag For Sony Psvita PS Vita 1000 and . and Brighter Screen The latest Playstation Portable .
How much time do you have to wait for PS vita Passcode .
Sony Ghana | Latest Technology & News | Electronics .
Where to buy Find your nearest Sony store to view our latest products. . PlayStation ® Entertainment . 1 Actual colour and dimension may differ from the screen .
PlayStation® Vita. . 1 Actual colour and dimension may differ from the screen image. . Ghana. For Professionals Company Info Contact Us.
Find Sony shop online at Jumia Kenya. Discover a great selection of the latest Sony products including Sony electronics, mobile phones, laptops, home theatre & .
Largest Online Shopping Store in Ghana - Zoobashop.
Shop at Ghana's largest online store and . PS Vita Games; Xbox . Zoobashop is Ghana's #1 Online Retail Store. Zoobashop is a wholly Ghanaian owned retail .
|
# coding=utf-8
# Copyright 2021 The TensorFlow Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Lazy imports for heavy dependencies."""
import functools
import importlib
from typing import TypeVar
from tensorflow_datasets.core.utils import py_utils as utils
_Fn = TypeVar("_Fn")
def _try_import(module_name):
"""Try importing a module, with an informative error message on failure."""
try:
mod = importlib.import_module(module_name)
return mod
except ImportError as e:
err_msg = ("Failed importing {name}. This likely means that the dataset "
"requires additional dependencies that have to be "
"manually installed (usually with `pip install {name}`). See "
"setup.py extras_require.").format(name=module_name)
utils.reraise(e, suffix=err_msg)
class LazyImporter(object):
"""Lazy importer for heavy dependencies.
Some datasets require heavy dependencies for data generation. To allow for
the default installation to remain lean, those heavy dependencies are
lazily imported here.
"""
@utils.classproperty
@classmethod
def apache_beam(cls):
return _try_import("apache_beam")
@utils.classproperty
@classmethod
def bs4(cls):
return _try_import("bs4")
@utils.classproperty
@classmethod
def crepe(cls):
return _try_import("crepe")
@utils.classproperty
@classmethod
def cv2(cls):
return _try_import("cv2")
@utils.classproperty
@classmethod
def gcld3(cls):
return _try_import("gcld3") # pylint: disable=unreachable
@utils.classproperty
@classmethod
def h5py(cls):
return _try_import("h5py")
@utils.classproperty
@classmethod
def langdetect(cls):
return _try_import("langdetect")
@utils.classproperty
@classmethod
def librosa(cls):
return _try_import("librosa")
@utils.classproperty
@classmethod
def lxml(cls):
return _try_import("lxml")
@utils.classproperty
@classmethod
def matplotlib(cls):
_try_import("matplotlib.pyplot")
return _try_import("matplotlib")
@utils.classproperty
@classmethod
def mwparserfromhell(cls):
return _try_import("mwparserfromhell")
@utils.classproperty
@classmethod
def networkx(cls):
return _try_import("networkx")
@utils.classproperty
@classmethod
def nltk(cls):
return _try_import("nltk")
@utils.classproperty
@classmethod
def pandas(cls):
return _try_import("pandas")
@utils.classproperty
@classmethod
def PIL_Image(cls): # pylint: disable=invalid-name
# TiffImagePlugin need to be activated explicitly on some systems
# https://github.com/python-pillow/Pillow/blob/5.4.x/src/PIL/Image.py#L407
_try_import("PIL.TiffImagePlugin")
return _try_import("PIL.Image")
@utils.classproperty
@classmethod
def PIL_ImageDraw(cls): # pylint: disable=invalid-name
return _try_import("PIL.ImageDraw")
@utils.classproperty
@classmethod
def pretty_midi(cls):
return _try_import("pretty_midi")
@utils.classproperty
@classmethod
def pycocotools(cls):
return _try_import("pycocotools.mask")
@utils.classproperty
@classmethod
def pydub(cls):
return _try_import("pydub")
@utils.classproperty
@classmethod
def scipy(cls):
_try_import("scipy.io")
_try_import("scipy.io.wavfile")
_try_import("scipy.ndimage")
return _try_import("scipy")
@utils.classproperty
@classmethod
def skimage(cls):
_try_import("skimage.color")
_try_import("skimage.filters")
try:
_try_import("skimage.external.tifffile")
except ImportError:
pass
return _try_import("skimage")
@utils.classproperty
@classmethod
def tifffile(cls):
return _try_import("tifffile")
@utils.classproperty
@classmethod
def tensorflow_data_validation(cls):
return _try_import("tensorflow_data_validation")
@utils.classproperty
@classmethod
def tensorflow_io(cls):
return _try_import("tensorflow_io")
@utils.classproperty
@classmethod
def tldextract(cls):
return _try_import("tldextract")
@utils.classproperty
@classmethod
def os(cls):
"""For testing purposes only."""
return _try_import("os")
@utils.classproperty
@classmethod
def test_foo(cls):
"""For testing purposes only."""
return _try_import("test_foo")
lazy_imports = LazyImporter # pylint: disable=invalid-name
def beam_ptransform_fn(fn: _Fn) -> _Fn:
"""Lazy version of `@beam.ptransform_fn`."""
lazy_decorated_fn = None
@functools.wraps(fn)
def decorated(*args, **kwargs):
nonlocal lazy_decorated_fn
# Actually decorate the function only the first time it is called
if lazy_decorated_fn is None:
lazy_decorated_fn = lazy_imports.apache_beam.ptransform_fn(fn)
return lazy_decorated_fn(*args, **kwargs)
return decorated
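# Illustrative usage: heavy modules are imported only on first attribute
# access, keeping the base install lean.
#
#   pd = lazy_imports.pandas            # runs `import pandas` at this point
#   df = pd.DataFrame({"a": [1, 2]})
#
# `beam_ptransform_fn` likewise defers both the apache_beam import and the
# `beam.ptransform_fn` decoration until the wrapped function is first called.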
|
Streamline Surveys Inc. invites you to join the Paid Surveys Consumer Panel Canada - English for free, and gives you an opportunity to help shape the products and services of today and tomorrow. Share your opinions through simple online surveys for money. Online surveys take a few minutes to fill out, and you'll receive cash rewards for your participation! All cash participant awards will be paid by Cint.com. An online surveys app is available for download.
|
from user_stuff.models import StarUser, Student
from bs4 import BeautifulSoup, Tag
from cookielib import CookieJar
import urllib
import urllib2
import urlparse
from StringIO import StringIO
import gzip
import re
from uni_info.models import Semester, Section, Course
from registrator.models import StudentRecord, StudentRecordEntry
class MyConcordiaBackend(object):
def __init__(self):
pass
@staticmethod
def authenticate(username=None, password=None, session=None):
reg = MyConcordiaAccessor()
if not reg.login(username, password):
return None
#implicit else:
student = reg.get_user_status()
if student:
try:
user = Student.objects.get(username=username)
except Student.DoesNotExist:
user = Student(username=username,
password='get from myconcordiaacc',
date_of_birth='1970-01-01')
user.save()
stud = StudentRecord.objects.get_or_create(
student=user,
_standing='Good'
)
stud_rec = reg.get_student_record()
stud_info = stud_rec.student_info
user.date_of_birth = stud_info['date_of_birth']
user.first_name = stud_info['first_name']
user.last_name = stud_info['last_name']
user.student_identifier = stud_info['id']
user.gender = stud_info['gender']
user.save()
else:
try:
user = StarUser.objects.get(username=username)
except StarUser.DoesNotExist:
user = StarUser(username=username, password='get from myconcordiaacc',
date_of_birth='1970-01-01')
user.save()
session['reg'] = reg
return user
@staticmethod
def get_user(user_id):
try:
return StarUser.objects.get(pk=user_id)
except StarUser.DoesNotExist:
return None
SEMESTER_MAPPER = {
'/2': Semester.FALL,
'/3': Semester.YEAR_LONG,
'/4': Semester.WINTER,
'/1': Semester.SUMMER_1
}
SEMESTER_REVERSE_MAPPER = {
Semester.FALL: '2',
Semester.YEAR_LONG: '3',
Semester.WINTER: '4'
}
class MyConcordiaAccessor():
LOGIN_URL = 'https://my.concordia.ca/psp/upprpr9/?cmd=login&languageCd=ENG'
STUDENT_RECORD_LINK_TEXT = 'Display the student record.'
STUDENT_RECORD_URL = 'https://genesis.concordia.ca/Transcript/PortalStudentRecord.aspx?token=%s'
REGISTRATION_URL = 'https://regsis.concordia.ca/portalRegora/undergraduate/wr150.asp?token=%s'
REGISTER_FOR_COURSE_URL = 'https://regsis.concordia.ca/portalRegora/undergraduate/wr225.asp'
CONFIRM_REGISTER_FOR_COURSE_URL = 'https://regsis.concordia.ca/portalRegora/undergraduate/wr300.asp'
CHANGE_SECTION_URL = 'https://regsis.concordia.ca/portalRegora/undergraduate/wr500.asp'
ACADEMIC_LINK = 'Academic'
REGISTRATION_NETLOC = 'regsis.concordia.ca'
LOGIN_FAILURE_URL = 'https://my.concordia.ca/psp/portprod/?cmd=login&languageCd=ENG'
DEFAULT_HEADERS = [
('Host', 'my.concordia.ca'),
('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'),
('Accept-Encoding', 'gzip,deflate,sdch'),
('Accept-Language', 'en-GB,en;q=0.8,fr;q=0.6,en-US;q=0.4,fr-CA;q=0.2'),
('Origin', 'https://www.myconcordia.ca'),
('Referer', 'https://www.myconcordia.ca/'),
('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.57 Safari/537.36')
]
def __init__(self):
self.cj = CookieJar()
self.opener = urllib2.build_opener(urllib2.HTTPCookieProcessor(self.cj))
self.opener.addheaders = [
('Host', 'my.concordia.ca:443'),
('Accept', 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8'),
('Accept-Encoding', 'gzip,deflate,sdch'),
('Accept-Language', 'en-GB,en;q=0.8,fr;q=0.6,en-US;q=0.4,fr-CA;q=0.2'),
('Origin', 'https://www.myconcordia.ca'),
('Referer', 'https://www.myconcordia.ca/'),
('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/31.0.1650.57 Safari/537.36')
]
self.site_tree = None
self.student_record = None
self.external_token = None
self.registration_id = None
self.currently_at = None
self.current_bs = None
def set_headers(self):
if self.currently_at:
parse = urlparse.urlparse(self.currently_at)
origin = parse[0] + '://' + parse[1]
self.opener.addheaders[0] = ('Host', parse[1])
self.opener.addheaders[4] = ('Origin', origin)
self.opener.addheaders[5] = ('Referer', self.currently_at)
def login(self, userid, password):
payload_data = {
'resource': '/content/cspace/en/login.html',
'_charset_': 'UTF-8',
'userid': userid,
'pwd': password
}
content = self.get_url(self.LOGIN_URL, payload_data)
if 'PS_TOKEN' in self.cj._cookies['.concordia.ca']['/']:
# we authenticated properly
self.site_tree = BeautifulSoup(content)
return True
else:
# we didn't authenticate properly, so we have to reset the object
self.__init__()
return False
def get_url(self, url, payload_data=None):
# Todo: make this more robust?
# Todo: also add an error condition for if an absolute URL is never supplied
if url[:4] != 'http':
url = urlparse.urljoin(self.currently_at, url)
self.set_headers()
if payload_data:
data = urllib.urlencode(payload_data)
response = self.opener.open(url, data)
else:
response = self.opener.open(url)
self.currently_at = response.geturl()
if response.info().get('Content-Encoding') == 'gzip':
return unzip(response.read())
else:
return response.read()
    def set_token(self):
        m = re.search(r'token=([a-zA-Z0-9]*)', self.currently_at)
        if m and m.group(1):
            self.external_token = m.group(1)
        else:
            raise ValueError('no token found in %s' % self.currently_at)
def get_url_with_token(self, url, payload_data=None):
return self.get_url(url % self.external_token, payload_data)
def get_url_with_registration_id(self, url, payload_data={}):
payload_data['Id'] = self.registration_id
return self.get_url(url, payload_data)
def get_content(self, url, payload_data=None):
bs = BeautifulSoup(self.get_url(url, payload_data))
return BeautifulSoup(self.get_url(bs.find('frame', title='Main Content')['src']))
def navigate_content(self, url, payload_data=None):
return BeautifulSoup(self.get_url(url, payload_data))
def get_nav_link_by_name(self, link):
try:
return self.site_tree.find('a', attrs={'name': link})['href']
except TypeError:
return None
def get_nav_link_by_title(self, link):
try:
return self.site_tree.find('a', title=link)['href']
except TypeError:
return None
def get_content_link_by_title(self, content, link):
return content.find('a', title=link)['href']
def submit_form(self, content, form_id=None, extra_data=None, force_url=None):
payload = {}
if form_id:
form = content.find('form', id=form_id)
hiddens = form.find_all('input', type='hidden')
else:
hiddens = content.find_all('input', type='hidden')
for el in hiddens:
payload[el.attrs['name']] = el['value']
if extra_data:
for data in extra_data.iteritems():
payload[data[0]] = data[1]
if force_url:
url = force_url
else:
url = form['action']
return self.navigate_content(url, payload)
def in_section(self, section):
if section == 'registration':
return self.netloc == self.REGISTRATION_NETLOC
@property
def netloc(self):
return urlparse.urlparse(self.currently_at)[1]
def get_student_record(self):
return ConcordiaStudentRecord(self.get_url_with_token(self.STUDENT_RECORD_URL))
# link = self.ACADEMIC_LINK
# content = self.get_content(self.get_nav_link_by_title(link))
# link = self.STUDENT_RECORD_LINK_TEXT
# self.student_record = self.get_content(self.get_content_link_by_title(content, link))
def goto_registration(self):
link = 'Registration'
content = self.get_content(self.get_nav_link_by_title(link))
link = 'Undergraduate Registration'
content = self.get_content(self.get_content_link_by_title(content, link))
content = self.navigate_content(content.meta['content'][6:])
form_id = 'form2' # Continue
self.current_bs = self.submit_form(content, form_id)
def get_registration_id(self):
content = BeautifulSoup(self.get_url(self.REGISTRATION_URL))
self.registration_id = content.find('input', type='hidden', attrs={'name': 'Id'})['value']
def register_for_course(self, section):
if not self.registration_id:
self.get_registration_id()
section_tree = section.section_tree_to_here
payload_data1 = {
'CourName': section.course.course_letters,
'CourNo': section.course.course_numbers,
'Sess': SEMESTER_REVERSE_MAPPER[section.semester_year.period],
'CatNum': '12345',
'MainSec': section_tree[0],
'RelSec1': '',
'RelSec2': '',
}
payload_data2 = {
'cournam': section.course.course_letters,
'courno': section.course.course_numbers,
'acyear': section.semester_year.year,
'session': SEMESTER_REVERSE_MAPPER[section.semester_year.period],
'mainsec': section_tree[0],
'relsec1': '',
'relsec2': '',
'subses': '',
'catalog': '12345',
}
if 1 in section_tree:
payload_data1['RelSec1'] = section_tree[1]
payload_data2['relsec1'] = section_tree[1]
if 2 in section_tree:
payload_data2['RelSec2'] = section_tree[2]
payload_data2['relsec2'] = section_tree[2]
# TODO: add check for confirmation?
response = self.get_url_with_registration_id(self.REGISTER_FOR_COURSE_URL, payload_data1)
response = self.get_url_with_registration_id(self.CONFIRM_REGISTER_FOR_COURSE_URL, payload_data2)
def change_section(self, from_section, to_section):
from_section_tree = from_section.section_tree_to_here
to_section_tree = to_section.section_tree_to_here
payload_data = {
'cournam': to_section.course.course_letters,
'courno': to_section.course.course_numbers,
'acyear': to_section.semester_year.year,
'toSession': SEMESTER_REVERSE_MAPPER[to_section.semester_year.period],
'mainsec': to_section_tree[0],
'relsec1': '',
'relsec2': '',
'subses': '',
'catalog': '12345',
'fmainsec': from_section_tree[0],
'frelsec1': '',
'frelsec2': '',
'fSession': SEMESTER_REVERSE_MAPPER[from_section.semester_year.period],
'fsubses': '',
}
if 1 in from_section_tree:
payload_data['frelsec1'] = from_section_tree[1]
payload_data['relsec1'] = to_section_tree[1]
if 2 in from_section_tree:
payload_data['frelsec2'] = from_section_tree[2]
payload_data['relsec2'] = to_section_tree[2]
response = self.get_url_with_registration_id(self.CHANGE_SECTION_URL, payload_data)
def add_course(self, sec_info):
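        # TODO: this method is unfinished -- the payload below references an
        # empty sec_info key and the request is never actually submitted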
if not self.registration_id:
self.get_registration_id()
payload_data = {
'CourName': sec_info['']
}
def get_user_status(self):
# status_link = self.get_nav_link_by_name('CU_ADDISPLAY')
# content = self.get_content(status_link)
l = urlparse.urljoin(self.currently_at.replace('psp', 'psc'), 'WEBLIB_CONCORD.CU_PUBLIC_INFO.FieldFormula.IScript_ADDisplay')
content = BeautifulSoup(self.get_url('https://my.concordia.ca/psc/upprpr9/EMPLOYEE/EMPL/s/WEBLIB_CONCORD.CU_PUBLIC_INFO.FieldFormula.IScript_ADDisplay'))
self.set_token()
stud_access = content.find('li').get_text()
student = False
m = re.match('Undergraduate/Graduate Student Access enabled', stud_access)
if m:
student = True
return student
def parse_registration_response(self, response):
soup = BeautifulSoup(response)
status = soup.find('table', class_='MAIN').find('td', bgcolor='#000080').font.b.string.strip()
if status == 'Course Registration Denied':
table = soup.find('table', border=0)
return False, table
class ConcordiaStudentRecord(object):
STUDENT_RECORD_REGEX = r'([A-Z]{4}) (\d{3}[A-Z]?) (\/\d) ([A-Z0-9]*) (.*) (\d\.\d\d) ([A-Z]*[-+]?)? ([A-Z]*) \(?(\d\.\d)?\)? (\d.\d\d)? (\d*) (\d\.\d\d)? (.*)'
def __init__(self, html):
self.html = html
self.soup = BeautifulSoup(html)
self.student = None
self.student_info = None
self.student_record = None
self.main_div = self.soup.find('div', id='SIMSPrintSection')
self.student_info_soup = self.main_div.table.tr.nextSibling
self.parse_student_info()
degree_req_bs = self.student_info_soup.nextSibling
self.degree_reqs = degree_req_bs.string
exemption_bs = degree_req_bs.nextSibling
self.exemptions = exemption_bs.string
table_of_takens_bs = exemption_bs.nextSibling.table
current_row = table_of_takens_bs.tr
while current_row:
current_text = current_row.get_text()
if 'ACADEMIC YEAR' in current_text:
# TODO: replace with regex
current_year = current_text[14:18]
elif 'SUMMER' in current_text or 'FALL-WINTER' in current_text:
# we can ignore it
pass
#self.current_semester = Semester.objects.get(year=current_year, period=Semester.SUMMER_1)
elif 'Grade/Notation/GPA' in current_text:
pass
else:
current_text = gen_string_from_current_row(current_row)
m = re.match(self.STUDENT_RECORD_REGEX, current_text)
if m:
course_letters = m.group(1)
course_numbers = m.group(2)
semester = m.group(3)
sem = Semester.objects.get(year=current_year, period=SEMESTER_MAPPER[semester])
sec_name = m.group(4)
course_name = m.group(5)
course_credits = m.group(6)
grade_received = m.group(7)
notation = m.group(8)
gpa_received = m.group(9)
class_avg = m.group(10)
class_size = m.group(11)
credits_received = m.group(12)
other = m.group(13)
course = Course.objects.get(course_letters=course_letters, course_numbers=course_numbers)
try:
sec = Section.objects.get(course=course, semester_year=sem, name=sec_name)
print "%s %s Section %s found" % (course_letters, course_numbers, sec_name)
except Section.DoesNotExist:
sec = Section(course=course, semester_year=sem, name=sec_name, sec_type=Section.LECTURE,
days='')
print "%s %s Section %s not found" % (course_letters, course_numbers, sec_name)
sec.save()
try:
rec_ent = StudentRecordEntry.objects.get(student_record=self.student_record, section=sec)
print "SRE %s %s Section %s found" % (course_letters, course_numbers, sec_name)
except StudentRecordEntry.DoesNotExist:
rec_ent = StudentRecordEntry(student_record=self.student_record, section=sec)
print "SRE %s %s Section %s not found" % (course_letters, course_numbers, sec_name)
if int(current_year) <= get_current_school_year() and gpa_received is not None:
rec_ent.state = StudentRecordEntry.COMPLETED
rec_ent.result_grade = gpa_received
else:
rec_ent.state = StudentRecordEntry.REGISTERED
rec_ent.save()
# print gen_string_from_current_row(current_row)
current_row = current_row.nextSibling
#return table_blah
def parse_student_info(self):
id_row = self.student_info_soup.table.tr
id_num = id_row.td.b.get_text()
name_row = id_row.nextSibling
first_name, last_name = name_row.td.get_text().split(u'\xa0')
next_row = name_row.nextSibling
next_row = next_row.nextSibling
date_of_birth_text, sex = next_row.td.nextSibling.get_text().split(u'\xa0')
import datetime
dob = datetime.datetime.strptime(date_of_birth_text, '%d/%m/%y').date()
self.student_info = {
'id': id_num,
'first_name': first_name,
'last_name': last_name,
'date_of_birth': dob,
'gender': sex.strip(),
}
self.student = Student.objects.get(student_identifier=id_num)
self.student_record = StudentRecord.objects.get(student=self.student)
def unzip(gzipped_data):
buf = StringIO(gzipped_data)
unzipped = gzip.GzipFile(fileobj=buf)
return unzipped.read()
def gen_string_from_current_row(row):
b = []
for a in row.contents:
if type(a) is Tag:
# this sub is dangerous.... keep an eye out.
to_append = re.sub('[ ]{2,10}', ' ', a.get_text().strip())
b.append(to_append)
#keeping this section just in case it's needed for other faculties
#else:
# if a.strip() != u'':
# c = 'durr %s' % a.strip().replace(' ', ' ')
# b.append(c)
return ' '.join(b)
def get_current_school_year():
from datetime import date
now = date.today()
if now.month < 5: # previous actual year = school year
return now.year - 1
# else return current year
return now.year
|
On episode 132 of the BSP, I talk about YouTube paying creators tens or hundreds of thousands of dollars to post content to its platform first and to promote its monetization tools, Twitter putting Alex Jones in time out for a week based on a Periscope livestream, and Facebook being pressured by the courts to decrypt end-to-end encrypted messages to assist a probe into MS-13 activities.
12:42 - What do you think the TSA can do to improve Air Travel Security?
19:14 - How do I address topics clearly on my podcast?
21:34 - How long should my podcast be?
25:00 - C01u Pro vs BM800 & UMC202HD?
|
import abc
class Stream(object):
"""
Abstract implementation of a data stream.
"""
__metaclass__ = abc.ABCMeta
def for_each(self, operation, limit=0, verbose=False):
"""
Applies the given Operation to each item in the stream. The Operation executes on the
items in the stream in the order that they appear in the stream.
If the limit is supplied, then processing of the stream will stop after that many items
have been processed.
"""
if limit != 0:
count = 0
while self.has_next():
operation.perform(self.next())
count += 1
if verbose:
print count
if count >= limit:
break
else:
while self.has_next():
operation.perform(self.next())
def filter(self, predicate):
"""
Transforms the stream by only keeping items that match the supplied predicate.
"""
return FilterStream(self, predicate)
def map(self, function):
"""
Transforms the stream by applying the supplied function to each item in the stream,
thus creating a new stream.
"""
return MapStream(self, function)
@abc.abstractmethod
def has_next(self):
"""
Tests to see if there are any items left in the stream to consume.
"""
pass
@abc.abstractmethod
def next(self):
"""
Fetches the next item in the stream.
"""
pass
class FilterStream(Stream):
"""
A stream created by applying a filter (in the form of a Predicate) to another stream.
"""
def __init__(self, source, predicate):
self.source = source
self.predicate = predicate
self.obj = None
def has_next(self):
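        # Look ahead: pull items from the source until one passes the predicate,
        # caching it so that repeated has_next() calls don't consume extra items.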
if self.obj is not None:
return True
while self.source.has_next() and self.obj is None:
self.obj = self.source.next()
if not self.predicate.test(self.obj):
self.obj = None
return self.obj is not None
def next(self):
if not self.has_next():
raise Exception("Iteration has no more elements")
to_return = self.obj
self.obj = None
return to_return
class MapStream(Stream):
"""
A stream created by applying a Function to the elements in another stream.
"""
def __init__(self, source, function):
self.source = source
self.function = function
def has_next(self):
return self.source.has_next()
def next(self):
return self.function.apply(self.source.next())
class BufferedStream(Stream):
"""
Implementation of a Stream that uses a BufferedQueue as its internal buffer.
This class is designed for use with live data sources that may produce data faster than it
can be consumed, as the internal BufferedQueue will drop items that aren't consumed (i.e,
removed from the queue) fast enough.
"""
def __init__(self, buf):
self.buf = buf
self.connected = False
def register(self, item):
"""
Attempts to 'register' an item with the BufferedStream by offering it to the
BufferedQueue. Returns True if the item was successfully published to the stream, or False
if it wasn't.
"""
return self.buf.offer(item)
def connect(self):
"""
Opens the streaming connection to the data source (makes has_next() return True)
"""
self.connected = True
def disconnect(self):
"""
Closes the stream (by making has_next() return False)
"""
self.connected = False
def has_next(self):
return self.connected
def next(self):
return self.buf.take()
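# A minimal usage sketch (the helper classes below are assumptions for
# illustration, not part of the library): a list-backed Stream plus simple
# stand-ins for the Predicate, Function, and Operation objects the stream
# methods expect.
class ListStream(Stream):
    def __init__(self, items):
        self.items = list(items)

    def has_next(self):
        return len(self.items) > 0

    def next(self):
        return self.items.pop(0)

class IsEven(object):
    def test(self, x):
        return x % 2 == 0

class Square(object):
    def apply(self, x):
        return x * x

class PrintOperation(object):
    def perform(self, item):
        print item

ListStream(range(10)).filter(IsEven()).map(Square()).for_each(PrintOperation())
# prints 0, 4, 16, 36, 64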
|
Karachi vs. Kansas is a clash of personalities, stories and experiences brought together to harmoniously respond to pressures and happenings of the world around us.
Natasha (N): As 21st century Muslim women, I feel like we are constantly misunderstood, with both internal and external stereotypes pulling us in different directions. On the one hand, I have to put up with “bomb” or “goat” jokes, while on the other hand, I have my grandmother glaring at the way I “immodestly” dress. On top of this, I feel as though there is a severe lack of role models for us. I remember as a young history nerd in middle school, I developed an obsession for Abigail Adams. I felt like if I learned everything about her, I could somehow legitimize my presence in the United States. Like any young girl, I was in need of a role model.
Faryal (F): I honestly can’t think of one positive portrayal of a Muslim character where their Muslim identity was not revealed in the context of a malicious terrorist plot. Even in real life, it is still so hard to find positive Muslim role models.
N: After Malala came to prominence a few years back, it really hit me how few mainstream Muslim women role models we have.
N: I was almost shocked when I learned about powerful Muslim women like Zeenat Mahal and Razia Sultana. Don’t get me wrong, I did grow up surrounded by very strong and successful Muslim women, but I’ve never had someone who I was really excited about meeting. That’s why I was so excited when we were both able to meet Khizr Khan, the father of a deceased American Army captain who spoke at the Democratic National Convention, this summer. I will never forget the excitement that surged through me as I waited in line to get my picture taken with him.
F: He gave the most incredible speech after addressing a sea of Pakistani-American adults an hour before. He insisted on speaking with the children of these adults because he understood the power of a role model. He understood what it means to be Muslim in America right now. As we sat there, I had tears in my eyes — something very rare for me — from sheer happiness. I couldn’t believe that I was finally hearing what I have internalized for so long verbalized and legitimized by a public figure.
N: Donald Trump’s hurtful comments have birthed a new generation of more vocal American Muslims. The current political atmosphere has almost been beneficial in this respect because it has encouraged Muslims to talk back, even though we shouldn’t have to do that to be, you know, respected. American Muslims are craving more Muslim role models to “come out,” use their public roles to combat Islamophobia and represent us in our complex realities. While I sometimes wish we didn’t have to, we really need inspirations with similar backgrounds to spark a larger and more sustained conversation.
F: I guess this is sort of the purpose of this column? Either way, stay tuned to help us figure it out!
|
import numpy as np
from scipy import misc
import glob
from tensorflow.contrib.learn.python.learn.datasets import base
class DataSet(object):
def __init__(self, images, labels):
self._images = images
self._labels = labels
self._num_examples = images.shape[0]
self._epochs_completed = 0
self._index_in_epoch = 0
@property
def images(self):
return self._images
@property
def labels(self):
return self._labels
@property
def num_examples(self):
return self._num_examples
@property
def epochs_completed(self):
return self._epochs_completed
def next_batch(self, batch_size):
"""Return the next `batch_size` examples from this data set."""
start = self._index_in_epoch
self._index_in_epoch += batch_size
if self._index_in_epoch > self._num_examples:
# Finished epoch
self._epochs_completed += 1
# Shuffle the data
perm = np.arange(self._num_examples)
np.random.shuffle(perm)
self._images = self._images[perm]
self._labels = self._labels[perm]
# Start next epoch
start = 0
self._index_in_epoch = batch_size
assert batch_size <= self._num_examples
end = self._index_in_epoch
return self._images[start:end], self._labels[start:end]
def read_data_set(downsample_factor=1):
image_paths = glob.glob("./IMG/*.png")
image_paths.sort()
# [print(i) for i in image_paths]
train_inputs = []
train_targets = []
# load data into train_inputs/targets
for i in range(len(image_paths)-2):
before_target = 255-np.array(misc.imread(image_paths[i]))
target = 255-np.array(misc.imread(image_paths[i+1]))
after_target = 255-np.array(misc.imread(image_paths[i+2]))
if downsample_factor > 1:
            before_target = before_target[::downsample_factor, ::downsample_factor, :]
            target = target[::downsample_factor, ::downsample_factor, :]
            after_target = after_target[::downsample_factor, ::downsample_factor, :]
x = np.concatenate((before_target,after_target),axis = 2)
train_inputs.append(x)
train_targets.append(target)
train_inputs = np.array(train_inputs)
train_targets = np.array(train_targets)
print(train_inputs.shape)
## split into train, test, validation
dataset_size = len(train_inputs)
test_size = int(0.15*(dataset_size))
validation_size = test_size
# shuffle data
perm = np.arange(dataset_size)
np.random.shuffle(perm)
train_inputs = train_inputs[perm]
train_targets = train_targets[perm]
# split
validation_inputs = train_inputs[-validation_size:]
validation_targets = train_targets[-validation_size:]
test_inputs = train_inputs[-(validation_size+test_size):-validation_size]
test_targets = train_targets[-(validation_size+test_size):-validation_size]
train_inputs = train_inputs[:-(validation_size+test_size)]
train_targets = train_targets[:-(validation_size+test_size)]
print('Train size:', train_inputs.shape)
print('Test size:', test_inputs.shape)
print('Validation size:', validation_inputs.shape)
# package as tf.Datasets object and return
train = DataSet(train_inputs, train_targets)
validation = DataSet(validation_inputs, validation_targets)
test = DataSet(test_inputs, test_targets)
return base.Datasets(train=train, validation=validation, test=test)
if __name__ == '__main__':
read_data_set()
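# Usage sketch (assumes ./IMG contains the PNG frame sequence described above):
#
#   data = read_data_set(downsample_factor=2)
#   inputs, targets = data.train.next_batch(16)
#   # `inputs` stacks the frames before and after each target along the channel
#   # axis; `targets` holds the middle frames to be predicted.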
|
SARON® Futures: Transitioning the CHF-denominated market smoothly to the new risk-free rate.
Following the introduction of the Three-Month SARON® Futures to help the Swiss market transition to a new risk-free rate, Eurex spoke with Martin Bardenhewer, Head of Financial Institutions & Multinationals, Zürcher Kantonalbank and Co-chair of the Swiss National Working Group on Reference Rates (NWG), Pascal Anderegg, Interest Rate Derivatives Trader, Zürcher Kantonalbank, and Michel Erni, Head of Market Rates & Director at Basler Kantonalbank.
How can the SARON Futures help the market transition to the new risk-free rate?
Martin Bardenhewer, Head of Financial Institutions & Multinationals, Zürcher Kantonalbank and Co-chair of the Swiss National Working Group on Reference Rates (NWG): Futures have been the most important instruments for the short end of the LIBOR-Swap curve. We expect the dominance of futures for short tenors to continue, and therefore, SARON Futures are set to be a key instrument for liquid trading in each segment of the SARON curve.
Michel Erni, Head of Market Rates & Director at Basler Kantonalbank: The input from the market side for a smooth transition becomes much more substantial with the launch of SARON Futures, bearing in mind that the SARON IRS market was established only recently, in April 2017. The introduction of the new SARON Futures completes the base for a new benchmark curve for the risk-free rate.
What are the key challenges for SARON and the SARON Futures?
Martin Bardenhewer: CHF is a small currency compared to most other IBOR currencies. Allocating liquidity to a small number of derivative contracts on SARON is key to a smooth transition away from LIBOR. Only if hedge instruments are liquid, or at least are recognized as due to become liquid in the near future, can interest rate risk in cash products like loans be managed without additional costs. Incentives to keep a LIBOR book will weaken quickly as soon as liquidity shifts from LIBOR derivatives to SARON derivatives: that is why the new SARON Futures fill a gap.
Michel Erni: The market and all associated products have to adapt to a completely new methodology, which means changing from a forward-looking LIBOR to a backward-looking SARON. Affected areas include, for example, the mortgage business and treasury; the change may also increase the use of derivatives to reduce the uncertainty of planning interest rate costs in advance.
How important is transparency post-LIBOR and how does SARON respond to the changing regulatory environment?
Martin Bardenhewer: SARON is calculated from transactions and binding quotes on CHF GC repos, by far the most important CHF money market segment. There are hardly any unsecured trades in the CHF money market, since rising capital requirements have made this segment inefficient.
Michel Erni: Because of the financial crisis and LIBOR scandals, regulators focus strongly on transparency, and this is the reason why SARON is based on Repo transactions, in contrast to LIBOR. This transparency – and hence, security – does come with a price: if we look for example at well-known LIBOR-based mortgages, clients do know their interest rate costs for the upcoming period after fixing of LIBOR in advance of this period. With SARON, however, overnight rates have to be compounded and thus clients come to know their final interest rate costs only after the period.
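To make the backward-looking mechanics concrete, here is a minimal Python sketch of compounding daily fixings over a short period (the rates and day counts are hypothetical, and the ACT/360 convention of the Swiss money market is assumed):
daily_rates = [0.0050, 0.0052, 0.0051]  # hypothetical SARON fixings, in decimal
days = [1, 1, 3]                        # calendar days each fixing applies (3 spans a weekend)
factor = 1.0
for rate, n in zip(daily_rates, days):
    factor *= 1 + rate * n / 360.0
period_days = sum(days)
compounded_rate = (factor - 1) * 360.0 / period_days
print(compounded_rate)  # known only once the last fixing is published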
How ready is the market to transition to the new risk-free rate?
Pascal Anderegg, Interest Rate Derivatives Trader, Zürcher Kantonalbank: For the time being, SARON swaps, introduced in 2017, have fully replaced CHF Tomorrow-Overnight-Index Swaps. So far, they have not gained significant market share from LIBOR swaps. Without the imminent necessity to switch to SARON, most of the liquidity in the OTC IRS market remains in LIBOR. We assume that many institutions are still in the process of adjusting IT systems and risk models to be able to handle SARON-based derivatives. By continuing to promote more SARON-based derivatives, the NWG is supporting the CHF market in enhancing its transition readiness away from LIBOR. In this respect, SARON Futures play an important role, as they allow market participants to move away from LIBOR-based futures for the hedging of short-term interest rate risks. Moreover, the futures, if sufficiently liquid, will be used to strip the short end of the swap curve, making the SARON swap market more liquid and robust.
How do SARON Futures help to efficiently trade CHF-denominated ETD & OTC products?
Pascal Anderegg: Having SARON Futures visibly on Central Limit Order Books (CLOBs) will facilitate the pricing of short-term forward starting swaps, which can be used for the hedging of changes in monetary policy rates.
Michel Erni: As there are still big interest rate hedging needs in the market for short- and long-term interest rates, a liquid alternative for CHF-3m-Futures is key for all participants. We now see more and more businesses based on SARON, and both the SARON Futures and SARON IRS help to eliminate basis risk. We expect more transfers from LIBOR-based hedging to SARON in the near future. Being prepared early is very important in our view.
How important are the new SARON Futures to having robust fixed income markets?
Pascal Anderegg: Currently, the fixed income market is marked against LIBOR swaps. As the CHF LIBOR swap market will most likely cease to exist by the end of 2021, it is important to have a robust SARON swap market in place. Having SARON Futures visibly on screens will help build confidence in the SARON swap market. However, it cannot be stressed enough how important it is for market participants to make the necessary adjustments to their systems and models and start relying on the SARON swap market. The launch of the SARON Futures should be the starting signal to switch from LIBOR to SARON.
Michel Erni: For a benchmark curve to support robust money and fixed income markets, futures contracts have to be considered part of the pricing; they improve credibility overall.
|
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.
# coding: utf-8
# pylint: disable=wildcard-import
"""Gamma Distribution."""
__all__ = ['Gamma']
from .exp_family import ExponentialFamily
from .constraint import Real, Positive
from .utils import getF, sample_n_shape_converter, gammaln, digamma
class Gamma(ExponentialFamily):
r"""Create a Gamma distribution object.
Parameters
----------
shape : Tensor or scalar
shape parameter of the distribution, often represented by `k` or `\alpha`
scale : Tensor or scalar, default 1
scale parameter of the distribution, often represented by `\theta`,
`\theta` = 1 / `\beta`, where `\beta` stands for the rate parameter.
F : mx.ndarray or mx.symbol.numpy._Symbol or None
Variable recording running mode, will be automatically
inferred from parameters if declared None.
"""
# pylint: disable=abstract-method
# TODO: Implement implicit reparameterization gradient for Gamma.
has_grad = False
support = Real()
arg_constraints = {'shape': Positive(), 'scale': Positive()}
def __init__(self, shape, scale=1.0, F=None, validate_args=None):
_F = F if F is not None else getF(shape, scale)
self.shape = shape
self.scale = scale
super(Gamma, self).__init__(
F=_F, event_dim=0, validate_args=validate_args)
def log_prob(self, value):
if self._validate_args:
self._validate_samples(value)
F = self.F
log_fn = F.np.log
lgamma = gammaln(F)
# alpha (concentration)
a = self.shape
# beta (rate)
b = 1 / self.scale
return a * log_fn(b) + (a - 1) * log_fn(value) - b * value - lgamma(a)
def broadcast_to(self, batch_shape):
new_instance = self.__new__(type(self))
F = self.F
new_instance.shape = F.np.broadcast_to(self.shape, batch_shape)
new_instance.scale = F.np.broadcast_to(self.scale, batch_shape)
super(Gamma, new_instance).__init__(F=F,
event_dim=self.event_dim,
validate_args=False)
new_instance._validate_args = self._validate_args
return new_instance
def sample(self, size=None):
return self.F.np.random.gamma(self.shape, 1, size) * self.scale
def sample_n(self, size=None):
return self.F.np.random.gamma(self.shape, 1, sample_n_shape_converter(size)) * self.scale
@property
def mean(self):
return self.shape * self.scale
@property
def variance(self):
return self.shape * (self.scale ** 2)
def entropy(self):
F = self.F
lgamma = gammaln(F)
dgamma = digamma(F)
return (self.shape + F.np.log(self.scale) + lgamma(self.shape) +
(1 - self.shape) * dgamma(self.shape))
@property
def _natural_params(self):
return (self.shape - 1, -1 / self.scale)
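# Usage sketch (assumes MXNet's numpy-compatible ndarray API is available; the
# values shown are the analytic results for k=2, theta=3):
#
#   from mxnet import np as mx_np
#   g = Gamma(shape=mx_np.array([2.0]), scale=mx_np.array([3.0]))
#   g.mean       # k * theta   = 6.0
#   g.variance   # k * theta^2 = 18.0
#   g.sample(size=(5,))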
|
Companies working in the field of Chip design in India ASIC & VLSI.
Skill: Architecture development, Synthesis, System level simulation, Verification, FPGA and Technology translation. PC standards and Networking.
Products: CF core with IDE for 8/16/32 bit microprocessor interface, CF core with AMBA interface. Models for FIFOs, SDRAM Controller, Various RAM models, Intel Peripherals.
Analog Devices (India) Pvt. Ltd.
Artech Information Systems Pvt. Ltd.
Skill : VHDL/Verilog, ASIC Design, Physical Design, Analog Design, PLL.
Area : Design Services, Communications.
AMD India Engineering Centre Pvt Ltd.
Skill : Cad Support, Physical Design.
Address: Bi SQUARE CONSULTANTS PVT. LTD.
Address: Bitmapper Integration Technologies Private Limited.
Skill : VHDL/Verilog, Synthesis, Architecture, Physical Design.
Skill : Verilog, ASIC Design, RT Synthesis, Architecture, DFT, SoC, DSP Algorithms.
CG-CoreEl Programmable Solutions Pvt. Ltd.
Skill : FPGA Design, VHDL/Verilog.
Cisco Systems (India) Pvt. Ltd.
Skill : Verilog/VHDL, ASIC Design, DFT/BIST, Verification, Physical Design.
Cirrus Logic Software (I) Pvt. Ltd.
Skill : Analog CMOS Design, VLSI Design.
Skill : FPGA Design, VHDL/Verilog, VLSI Design.
Skill : VHDL/Verilog, VLSI Design, Tools for Layout and Verification.
Skill : VHDL/Verilog, VLSI Design, Physical Design.
Skill : VLSI Design, Physical Design, Mixed Analog-Digital Design, Architectures.
Area : Controllers, Consumer Electronics, Transport.
HCL Technologies India Pvt. Ltd.
Skill : VLSI Design, FPGA/ASIC Design, Physical Design, Synthesis.
IBM Global Services India Pvt. Ltd.
Phone : 080-5262355, 5267117, 5269299.
Skill : VHDL, VLSI Design, Physical Design.
Address:ICON Design Automation Pvt. Ltd.
Skill : VHDL/Verilog, VLSI Design.
Skill : VHDL/Verilog, VLSI Design, Analog Design, Mixed Signal, DSP Architectures, RF design, Testing.
Skill : VHDL/Verilog, ASIC Design, SoC, EDA Tools, High-Speed Digital Design.
Skill : Verilog, ASIC Design.
Area : Network Processors, Switching and Routing Equipment.
Skill : VLSI Design, Power Electronics, PCB Design.
Skill : VHDL/Verilog, ASIC Design, Physical Design.
Area : Design Services, ATM, SONET/SDH, Gigabit Ethernet.
Skill : Physical Design, Backend Tools, CAD Tools, C/C++.
NatSem India Designs Pvt. Ltd.
Skill : Physical Design, VHDL/Verilog.
Skill : Physical Design, VHDL Synthesis, Software.
Sanyo LSI Technology India Ltd.
Skill : VLSI Design, VHDL/Verilog.
Skill : VHDL/Verilog, ASIC Design, Physical Design, Logic Synthesis, VLSI Design.
Area : Design Services, Telecommunications, Embedded Systems.
Area : ASIC Design, Development, Fabrication, Packaging and Quality Assurance, and VLSI Education under the VEDANT Programme.
Skill : Physical Design, CAD Tools.
Phone : 011-8530965, 8556233, 8522810.
FAX : 011-8531091, 8530687, 8530705.
Skill : Physical Design, VHDL Synthesis.
Skill : VHDL Synthesis, CAD Tools.
Phone : 080-5563945, 5364835, 5363956.
Skill : VHDL/Verilog Synthesis, VLSI Design.
Tejas Networks India Pvt. Ltd.
Phone : 080-2267495, 2384712, 2384713, 2384714, 2384715.
Area : Optical Networking, SONET/SDH, DWDM.
Skill : VHDL, VLSI Design, CAD Tools, Analog/Mixed Signal, Digital Logic, Physical Design.
NEW DELHI -- 110 029.
Skill : VHDL, ASIC Design.
Vedatech India (Software) Pvt. Ltd.
Address: BANGALORE -- 560 025.
Skill : VHDL/Verilog, ASIC/FPGA Design, Physical Design, SPICE, DSP, RF Design.
Area : Design Services, Telecommunications, Networking.
Skill : VLSI Design, VHDL/Verilog, Physical Design.
Area : SoC, Memory, Embedded Systems, Cores.
|
### extends 'class_empty.py'
### block ClassImports
# NOTICE: Do not edit anything here, it is generated code
from . import gxapi_cy
from geosoft.gxapi import GXContext, float_ref, int_ref, str_ref
### endblock ClassImports
### block Header
# NOTICE: The code generator will not replace the code in this block
### endblock Header
### block ClassImplementation
# NOTICE: Do not edit anything here, it is generated code
class GXMAPL(gxapi_cy.WrapMAPL):
"""
GXMAPL class.
The `GXMAPL <geosoft.gxapi.GXMAPL>` class is the interface with the MAPPLOT program,
which reads a MAPPLOT control file and plots graphical
entities to a map. The `GXMAPL <geosoft.gxapi.GXMAPL>` object is created for a given
control file, then passed to the MAPPLOT program, along
with the target `GXMAP <geosoft.gxapi.GXMAP>` object on which to do the drawing
"""
def __init__(self, handle=0):
super(GXMAPL, self).__init__(GXContext._get_tls_geo(), handle)
@classmethod
def null(cls):
"""
A null (undefined) instance of `GXMAPL <geosoft.gxapi.GXMAPL>`
:returns: A null `GXMAPL <geosoft.gxapi.GXMAPL>`
:rtype: GXMAPL
"""
return GXMAPL()
def is_null(self):
"""
Check if this is a null (undefined) instance
:returns: True if this is a null (undefined) instance, False otherwise.
:rtype: bool
"""
return self._internal_handle() == 0
# Miscellaneous
@classmethod
def create(cls, name, ref_name, line):
"""
Create a `GXMAPL <geosoft.gxapi.GXMAPL>`.
:param name: `GXMAPL <geosoft.gxapi.GXMAPL>` file name
:param ref_name: Map base reference name
:param line: Start line number in file (0 is first)
:type name: str
:type ref_name: str
:type line: int
:returns: `GXMAPL <geosoft.gxapi.GXMAPL>`, aborts if creation fails
:rtype: GXMAPL
.. versionadded:: 5.0
**License:** `Geosoft Open License <https://geosoftgxdev.atlassian.net/wiki/spaces/GD/pages/2359406/License#License-open-lic>`_
**Note:** The default map groups will use the reference name with
"_Data" and "_Base" added. If no reference name is specified,
the name "`GXMAPL <geosoft.gxapi.GXMAPL>`" is used
"""
ret_val = gxapi_cy.WrapMAPL._create(GXContext._get_tls_geo(), name.encode(), ref_name.encode(), line)
return GXMAPL(ret_val)
@classmethod
def create_reg(cls, name, ref_name, line, reg):
"""
Create a `GXMAPL <geosoft.gxapi.GXMAPL>` with `GXREG <geosoft.gxapi.GXREG>`.
:param name: `GXMAPL <geosoft.gxapi.GXMAPL>` file name
:param ref_name: Map base reference name
:param line: Start line number in file (0 is first)
:type name: str
:type ref_name: str
:type line: int
:type reg: GXREG
:returns: `GXMAPL <geosoft.gxapi.GXMAPL>`, aborts if creation fails
:rtype: GXMAPL
.. versionadded:: 5.0
**License:** `Geosoft Open License <https://geosoftgxdev.atlassian.net/wiki/spaces/GD/pages/2359406/License#License-open-lic>`_
**Note:** The default map groups will use the reference name with
"_Data" and "_Base" added. If no reference name is specified,
the name "`GXMAPL <geosoft.gxapi.GXMAPL>`" is used
"""
ret_val = gxapi_cy.WrapMAPL._create_reg(GXContext._get_tls_geo(), name.encode(), ref_name.encode(), line, reg)
return GXMAPL(ret_val)
def process(self, map):
"""
Process a `GXMAPL <geosoft.gxapi.GXMAPL>`
:type map: GXMAP
.. versionadded:: 5.0
**License:** `Geosoft Open License <https://geosoftgxdev.atlassian.net/wiki/spaces/GD/pages/2359406/License#License-open-lic>`_
"""
self._process(map)
def replace_string(self, var, repl):
"""
Adds a replacement string to a mapplot control file.
:param var: Variable
:param repl: Replacement
:type var: str
:type repl: str
.. versionadded:: 5.0
**License:** `Geosoft Open License <https://geosoftgxdev.atlassian.net/wiki/spaces/GD/pages/2359406/License#License-open-lic>`_
"""
self._replace_string(var.encode(), repl.encode())
### endblock ClassImplementation
### block ClassExtend
# NOTICE: The code generator will not replace the code in this block
### endblock ClassExtend
### block Footer
# NOTICE: The code generator will not replace the code in this block
### endblock Footer
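# Usage sketch (hypothetical file names; assumes a valid Geosoft context and
# `target_map`, an open geosoft.gxapi.GXMAP object obtained elsewhere):
#
#   mapl = GXMAPL.create('control.con', 'Base', 0)
#   mapl.replace_string('%TITLE%', 'Survey Area 1')   # variable name hypothetical
#   mapl.process(target_map)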
|
The whole installation process looks like it works, but then when I go back to the website app that needs it, it prompts me to download Silverlight again. It's not showing up in the list of plugins in the Safari "Preferences > Websites". And I can't find the option anywhere to select to "Allow Always". It also seems like Safari 12 has different menu items. For instance, under "Help" there is no "Installed Plugins" option.
I have exactly the same problem. The Microsoft material looks out of date and the links take you to the wrong places.
Silverlight was working and then stopped after updating to Safari 12. I now have the exact same problem.
Scroll all the way down to the bottom of the page. Adobe Flash is the only exception.
Same here, and I'm pissed. I can't access my stocks now.
I use silverlight for probably 70% of my work software and experienced this exact same problem today, also on 10.12.6. Can't get anything done now.
I've avoided upgrading to High Sierra specifically worried that I'd lose the ability to use Silverlight. Now I've found out that Firefox has also discontinued support for Silverlight.
Has anyone found a workaround yet? I can't get Chrome to work with it, either.
You can use a German browser called iCab. The newest version is 5.85.5 and it works fine on Mac OS 10.13.6 with Silverlight. It costs 10 USD.
You will be happy again, like me.
Silverlight does not work with Safari 12. Apple has confirmed this. But, you can find the "Plugins" tab under Safari/Preferences/Websites, on the lower left hand side of the box. Unfortunately, that won't help you with this problem.
Then I go to Finder > Applications, right-click the app, choose Get Info, and select the App Nap option.
Whenever I go to open SeaMonkey... even after putting it to nap... it doesn't want to open.
|
"""
SleekXMPP: The Sleek XMPP Library
Copyright (C) 2010 Nathanael C. Fritz
This file is part of SleekXMPP.
See the file LICENSE for copying permission.
"""
import logging
from sleekxmpp.stanza.iq import Iq
from sleekxmpp.xmlstream import register_stanza_plugin
from sleekxmpp.xmlstream.handler import Callback
from sleekxmpp.xmlstream.matcher import StanzaPath
from sleekxmpp.plugins import BasePlugin
from sleekxmpp.plugins import xep_0082
from sleekxmpp.plugins.xep_0202 import stanza
log = logging.getLogger(__name__)
class XEP_0202(BasePlugin):
"""
XEP-0202: Entity Time
"""
name = 'xep_0202'
description = 'XEP-0202: Entity Time'
dependencies = set(['xep_0030', 'xep_0082'])
stanza = stanza
default_config = {
#: As a default, respond to time requests with the
#: local time returned by XEP-0082. However, a
#: custom function can be supplied which accepts
#: the JID of the entity to query for the time.
'local_time': None,
'tz_offset': 0
}
def plugin_init(self):
"""Start the XEP-0203 plugin."""
if not self.local_time:
def default_local_time(jid):
return xep_0082.datetime(offset=self.tz_offset)
self.local_time = default_local_time
self.xmpp.register_handler(
Callback('Entity Time',
StanzaPath('iq/entity_time'),
self._handle_time_request))
register_stanza_plugin(Iq, stanza.EntityTime)
def plugin_end(self):
self.xmpp['xep_0030'].del_feature(feature='urn:xmpp:time')
self.xmpp.remove_handler('Entity Time')
def session_bind(self, jid):
self.xmpp['xep_0030'].add_feature('urn:xmpp:time')
def _handle_time_request(self, iq):
"""
Respond to a request for the local time.
The time is taken from self.local_time(), which may be replaced
during plugin configuration with a function that maps JIDs to
times.
Arguments:
iq -- The Iq time request stanza.
"""
iq.reply()
iq['entity_time']['time'] = self.local_time(iq['to'])
iq.send()
def get_entity_time(self, to, ifrom=None, **iqargs):
"""
Request the time from another entity.
Arguments:
to -- JID of the entity to query.
            ifrom -- Specify the sender's JID.
block -- If true, block and wait for the stanzas' reply.
timeout -- The time in seconds to block while waiting for
a reply. If None, then wait indefinitely.
callback -- Optional callback to execute when a reply is
received instead of blocking and waiting for
the reply.
"""
iq = self.xmpp.Iq()
iq['type'] = 'get'
iq['to'] = to
iq['from'] = ifrom
iq.enable('entity_time')
return iq.send(**iqargs)
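# Usage sketch (assumes a connected SleekXMPP ClientXMPP instance `xmpp`):
#
#   xmpp.register_plugin('xep_0202')
#   iq = xmpp['xep_0202'].get_entity_time('juliet@capulet.lit/balcony')
#   print(iq['entity_time']['time'])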
|
Yesterday grim footage apparently showing a British engineer kidnapped from a house in Baghdad last week along with two American colleagues surfaced in a video released in the Iraqi capital. The group holding the three threatened to execute them unless Iraqi women prisoners are released from jail.
And last night it was reported that 10 more staff working for an American-Turkish company had been seized as hostages.
There are now fears that scheduled Iraqi elections in January will have to be delayed because of the growing instability.
Last week Geoff Hoon, the Defence Secretary, said that more troops could be sent to safeguard the polls if necessary, although Whitehall sources said there was no guarantee that they would be British.
The reduction will take place when the First Mechanised Infantry Brigade is replaced by the Fourth Armoured Division, now based in Germany, in a routine rotation over the next few weeks.
Currently there are 8,000 British troops in the 14,000-strong 'multinational division' in southern Iraq, which has responsibility for about 4.5 million people.
Iyad Allawi, the interim Iraqi Prime Minister, will hold talks with Tony Blair at Chequers tomorrow on security issues, including elections and the strengthening of border patrols.
News of the troop withdrawal comes at a difficult time for Blair, with the publication yesterday of leaked documents suggesting that he was warned a year before the invasion that it could prompt a meltdown.
However Tessa Jowell, the Culture Secretary and a close ally of Blair, told The Observer that the Prime Minister still believed that Britain's actions would be justified by the restoration of democracy 'however difficult and remote a prospect that seems at the moment, when our headlines are crowded with further attacks by the insurgents'.
In another embarrassment for the Prime Minister, a draft report from the Iraqi Survey Group, set up to investigate Saddam Hussein's weapons programme, has concluded that the former dictator's only chemical or biological armament was a small amount of poison for use in political killings.
|
import unittest
from kindergarten_garden import Garden
# Tests adapted from `problem-specifications//canonical-data.json` @ v1.0.0
class KindergartenGardenTests(unittest.TestCase):
def test_garden_with_single_student(self):
self.assertEqual(
Garden("RC\nGG").plants("Alice"),
"Radishes Clover Grass Grass".split())
def test_different_garden_with_single_student(self):
self.assertEqual(
Garden("VC\nRC").plants("Alice"),
"Violets Clover Radishes Clover".split())
def test_garden_with_two_students(self):
garden = Garden("VVCG\nVVRC")
self.assertEqual(
garden.plants("Bob"), "Clover Grass Radishes Clover".split())
def test_multiple_students_for_the_same_garden_with_three_students(self):
garden = Garden("VVCCGG\nVVCCGG")
self.assertEqual(garden.plants("Bob"), ["Clover"] * 4)
self.assertEqual(garden.plants("Charlie"), ["Grass"] * 4)
def test_full_garden(self):
garden = Garden("VRCGVVRVCGGCCGVRGCVCGCGV\nVRCCCGCRRGVCGCRVVCVGCGCV")
self.assertEqual(
garden.plants("Alice"),
"Violets Radishes Violets Radishes".split())
self.assertEqual(
garden.plants("Bob"), "Clover Grass Clover Clover".split())
self.assertEqual(
garden.plants("Kincaid"), "Grass Clover Clover Grass".split())
self.assertEqual(
garden.plants("Larry"), "Grass Violets Clover Violets".split())
# Additional tests for this track
def test_disordered_test(self):
garden = Garden(
"VCRRGVRG\nRVGCCGCV",
students="Samantha Patricia Xander Roger".split())
self.assertEqual(
garden.plants("Patricia"),
"Violets Clover Radishes Violets".split())
self.assertEqual(
garden.plants("Xander"), "Radishes Grass Clover Violets".split())
if __name__ == '__main__':
unittest.main()
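# For reference, a minimal `kindergarten_garden.py` satisfying the tests above
# might look like this (a sketch, not the canonical exercise solution):
#
#   DEFAULT_STUDENTS = ("Alice Bob Charlie David Eve Fred Ginny "
#                       "Harriet Ileana Joseph Kincaid Larry").split()
#   PLANTS = {"V": "Violets", "R": "Radishes", "C": "Clover", "G": "Grass"}
#
#   class Garden(object):
#       def __init__(self, diagram, students=DEFAULT_STUDENTS):
#           self.rows = diagram.split("\n")
#           self.students = sorted(students)
#
#       def plants(self, name):
#           i = self.students.index(name) * 2
#           return [PLANTS[row[j]] for row in self.rows for j in (i, i + 1)]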
|
Martin R. Huberty was Associate Irrigation Engineer in the Experiment Station; C. N. Johnston was Assistant Irrigation Engineer in the Experiment Station.
Hilgardia 14(3):119-146. DOI:10.3733/hilg.v14n03p119. October 1941.
The deep alluvial fill of Putah Creek forms a storage basin from which much irrigation water is pumped. Continued and expanding demand upon the underground water supply has caused a gradual recession in the water plane. This condition leads many farmers to question the permanency of their water supply.
The College of Agriculture at Davis is located within the basin of Putah Creek. In years of low rainfall, the underground basin is its sole source of water supply. Hence, since the early days of the institution, the Division of Irrigation has observed underground water conditions on the University Farm.
The deficient rainfall of the winter of 1930-31 emphasized the need for a comprehensive study of the water supply in the Putah Creek area. Although Bryan (2) studied the basin in 1912, conditions have changed materially since that date. An informal project, outlining a study to supplement existing information on the water supply and the pumping conditions of the area, was formulated by the divisions of Agricultural Engineering, Chemistry, and Irrigation.
[2.] Bryan Kirk. Ground water for irrigation in the Sacramento Valley, California. U. S. Dept. Interior, Geol. Survey, Water-Supply Paper. 1915. 375:1-49.
[3.] Gardner Willard, Collier T. R., Farr Doris. Groundwater. Part I. Fundamental principles governing its physical control. Utah Agr. Exp. Sta. Bul. 1934. 252:1-40.
[4.] Israelsen O. W., McLaughlin W. W. Drainage of land overlying an artesian groundwater reservoir. Utah Agr. Exp. Sta. Bul. 1935. 259:1-32.
[5.] Slichter C. S. Field measurements of the rate of movement of underground waters. U. S. Dept. Interior, Geol. Survey, Water-Supply Paper. 1905. 140:1-122.
|
#!/usr/bin/env python
################
# see notes at bottom for requirements
from __future__ import absolute_import, print_function
import glob
import os
import sys
from sys import platform
from distutils.core import setup
from pkg_resources import parse_version
# import versioneer
import psychopy
version = psychopy.__version__
# regenerate __init__.py only if we're in the source repos (not in a zip file)
try:
    import createInitFile  # won't exist in a sdist.zip
    writeNewInit = True
except ImportError:
    writeNewInit = False
if writeNewInit:
vStr = createInitFile.createInitFile(dist='bdist')
# define the extensions to compile if necessary
packageData = []
requires = []
if platform != 'darwin':
raise RuntimeError("setupApp.py is only for building Mac Standalone bundle")
import bdist_mpkg
import py2app
resources = glob.glob('psychopy/app/Resources/*')
resources.append('/Library/Frameworks/Python.framework/Versions/2.7/include/python2.7/pyconfig.h')
frameworks = ["libavbin.dylib", "/usr/lib/libxml2.2.dylib", #"libyaml.dylib",
"libevent.dylib", "libffi.dylib",
"libmp3lame.0.dylib",
"/usr/local/Cellar/glfw/3.2.1/lib/libglfw.3.2.dylib",
]
opencvLibs = glob.glob(os.path.join(sys.exec_prefix, 'lib', 'libopencv*.2.4.dylib'))
frameworks.extend(opencvLibs)
import macholib
#print("~"*60 + "macholib version: "+macholib.__version__)
if parse_version(macholib.__version__) <= parse_version('1.7'):
print("Applying macholib patch...")
import macholib.dyld
import macholib.MachOGraph
dyld_find_1_7 = macholib.dyld.dyld_find
def dyld_find(name, loader=None, **kwargs):
#print("~"*60 + "calling alternate dyld_find")
if loader is not None:
kwargs['loader_path'] = loader
return dyld_find_1_7(name, **kwargs)
macholib.MachOGraph.dyld_find = dyld_find
includes = ['Tkinter', 'tkFileDialog',
'imp', 'subprocess', 'shlex',
'shelve', # for scipy.io
'_elementtree', 'pyexpat', # for openpyxl
'hid',
'pyo', 'greenlet', 'zmq', 'tornado',
'psutil', # for iohub
'pysoundcard', 'soundfile', 'sounddevice',
'cv2', 'hid',
'xlwt', # writes excel files for pandas
'vlc', # install with pip install python-vlc
'msgpack_numpy',
'configparser',
]
packages = ['wx', 'psychopy',
'pyglet', 'pygame', 'pytz', 'OpenGL', 'glfw',
'scipy', 'matplotlib', 'lxml', 'xml', 'openpyxl',
'moviepy', 'imageio',
'_sounddevice_data','_soundfile_data',
'cffi','pycparser',
'PIL', # 'Image',
'objc', 'Quartz', 'AppKit', 'QTKit', 'Cocoa',
'Foundation', 'CoreFoundation',
'pkg_resources', # needed for objc
'pyolib',
'requests', 'certifi', 'cryptography',
'pyosf',
# for unit testing
'coverage',
# handy external science libs
'serial',
'egi', 'pylink',
'pyxid',
'pandas', 'tables', # 'cython',
'msgpack', 'yaml', 'gevent', # for ioHub
# these aren't needed, but liked
'psychopy_ext', 'pyfilesec',
'bidi', 'arabic_reshaper', # for right-left language conversions
# for Py3 compatibility
'future', 'past', 'lib2to3',
'json_tricks', # allows saving arrays/dates in json
'git', 'gitlab',
'astunparse', 'esprima', # for translating/adapting py/JS
'pylsl', 'pygaze',
]
if sys.version_info.major >= 3:
packages.extend(['PyQt5'])
else:
# not available or not working under Python3:
includes.extend(['UserString', 'ioLabs', 'FileDialog'])
packages.extend(['PyQt4', 'labjack', 'rusocsci'])
# is available but py2app can't seem to find it:
packages.extend(['OpenGL'])
setup(
app=['psychopy/app/psychopyApp.py'],
options=dict(py2app=dict(
includes=includes,
packages=packages,
excludes=['bsddb', 'jinja2', 'IPython','ipython_genutils','nbconvert',
'libsz.2.dylib',
# 'stringprep',
'functools32',
], # anything we need to forcibly exclude?
resources=resources,
argv_emulation=True,
site_packages=True,
frameworks=frameworks,
iconfile='psychopy/app/Resources/psychopy.icns',
plist=dict(
CFBundleIconFile='psychopy.icns',
CFBundleName = "PsychoPy3",
CFBundleShortVersionString = version, # must be in X.X.X format
CFBundleGetInfoString = "PsychoPy3 "+version,
CFBundleExecutable = "PsychoPy3",
CFBundleIdentifier = "org.psychopy.PsychoPy3",
CFBundleLicense = "GNU GPLv3+",
CFBundleDocumentTypes=[dict(CFBundleTypeExtensions=['*'],
CFBundleTypeRole='Editor')],
LSEnvironment=dict(PATH="/usr/local/git/bin:/usr/local/bin:"
"/usr/local:/usr/bin:/usr/sbin"),
),
)) # end of the options dict
)
# ugly hack for opencv2:
# As of opencv 2.4.5 the cv2.so binary used rpath to a fixed
# location to find libs and even more annoyingly it then appended
# 'lib' to the rpath as well. These were fine for the packaged
# framework python but the libs in an app bundle are different.
# So, create symlinks so they appear in the same place as in framework python
rpath = "dist/PsychoPy3.app/Contents/Resources/"
for libPath in opencvLibs:
libname = os.path.split(libPath)[-1]
realPath = "../../Frameworks/"+libname # relative path (w.r.t. the fake)
fakePath = os.path.join(rpath, "lib", libname)
os.symlink(realPath, fakePath)
# they even did this for Python lib itself, which is in diff location
realPath = "../Frameworks/Python.framework/Python" # relative to the fake path
fakePath = os.path.join(rpath, "Python")
os.symlink(realPath, fakePath)
if writeNewInit:
# remove unwanted info about this system post-build
createInitFile.createInitFile(dist=None)
# running testApp from within the app raises wx errors
# shutil.rmtree("dist/PsychoPy3.app/Contents/Resources/lib/python2.6/psychopy/tests/testTheApp")
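# Typically built with: python setupApp.py py2app
# (requires py2app and bdist_mpkg to be installed, as imported above)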
|
The It’s-It company began selling scoops of vanilla ice cream sandwiched between old-fashioned oatmeal cookies cloaked in chocolate at San Francisco’s Playland at the Beach in 1928. By the time I moved to the area in 1978, the It’s-It was a well-established local phenomenon. I’ve filled my version of the novelty with a not-overly-sweet vanilla frozen custard.
Don’t be put off by the recipe’s many steps: Each is reasonably quick and easy, and one bite will convince you it was all worthwhile. I half-dip the sandwiches for a pretty finish. If you wish to fully dip them, double the Chocolate Shell recipe.
Whisk the yolks with 1 tablespoon of the sugar in a medium bowl until smooth and slightly thickened. Set aside.
Whisk 1/2 cup (120 ml) of the milk with the syrup, tapioca, salt, and the remaining 1/3 cup (67 g) sugar in a medium saucepan until no lumps remain. Stir in the cream and the remaining 1 cup (340 ml) milk. Use a paring knife to scrape in the seeds from the vanilla bean; toss in the pod. Heat the mixture over medium-high heat, stirring with a heatproof spatula, until it begins to steam and slightly bubble at the edges.
Ladle 1 cup (240 ml) of the hot mixture into the yolks in a stream as you whisk the mixture to prevent the yolks from scrambling. Whisk the mixture back into the saucepan and cook at a slow simmer, stirring with a spatula, until the mixture thickens enough to thickly coat a spoon, 1 to 2 minutes longer.
Strain the mixture through a fine-mesh strainer into a metal bowl. (Rinse and save the pod for another use, or discard.) Stir in the vanilla extract.
Preheat the oven to 350°F (175°C) with racks in the upper and lower thirds of the oven. Line two baking sheets with parchment paper or silicone baking mats. Whisk together the flour, baking soda, baking powder, and salt in a small bowl.
Put the butter and brown sugar in a mixing bowl and use a wooden spoon or a handheld electric mixer to mix until they are creamy. Add the egg and vanilla and mix until smooth. Stir in the flour mixture just until everything is well combined, then stir in the oats and raisins, if using.
Spoon or scoop the batter in tablespoons onto the prepared baking sheets, spacing them evenly to make 24 cookies. Press the cookies with lightly dampened fingers to flatten them slightly – they will spread further as they bake. Bake until the cookies are light golden around the edges, 8 to 10 minutes, rotating the pans top to bottom and front to back halfway through baking.
Let the cookies cool on the pan for 5 minutes, then transfer them directly to a wire rack to cool completely, sliding a spatula under them if they do not release easily.
Freeze until firm, at least 2 hours, before dipping. Dip the sandwiches to coat them halfway in chocolate following the instructions below.
Makes enough to half-dip about 12 sandwiches or fully coat 6 to 8 sandwiches.
This chocolate coating could not be easier. A bit of oil keeps the chocolate smooth for dipping, then magically helps it to harden to a brittle coating in only a few minutes in the freezer. The elegant chocolate layer shatters beautifully when you bite into the sandwich, melting smoothly on the tongue.
I prefer an extra-bittersweet chocolate for this – 64 to 72 percent cacao – but choose whatever chocolate you like, from milk to dark. For a white chocolate shell, increase the oil to 1/4 cup (60 ml), and choose a good-quality white chocolate, such as El Rey, Ghirardelli, or Guittard (not chips).
The sandwiches should be firmly frozen before dipping. To decorate the sandwiches, press chopped and toasted nuts, coconut, sprinkles, or other decorations onto the dipped sandwiches before the chocolate firms. Any remaining Chocolate Shell can be melted and served over ice cream, or can be stirred into ice cream as it spins for a stracciatella (chocolate chip) effect.
Melt the chocolate with the oil in a double boiler or bowl placed over, but not touching, about an inch (2-1/2 cm) of simmering water in a saucepan. Alternatively, melt the chocolate with the oil in the microwave until you can stir it smooth. You do not need to get the chocolate very hot – just warm enough to melt when you stir it. Set aside until the chocolate is just barely warm and still smooth and fluid.
Dunk a sandwich into the chocolate to coat half or all of the sandwich, using a small offset spatula as an aid to paint on the chocolate and scrape off any excess. Transfer the sandwich to the baking sheet in the freezer. Repeat to coat the remaining sandwiches. Freeze until the chocolate sets, about 15 minutes, before individually wrapping the sandwiches or layering them between sheets of parchment or waxed paper in an airtight container.
Substitute oatmeal cookies and vanilla ice cream.
Immediately after dipping each sandwich, dunk the soft chocolate into a bowl of chopped toasted nuts.
|